F. Mai, N. Pappas, I. Montero, N. A. Smith, J. Henderson. "Plug and Play Autoencoders for Conditional Text Generation." arXiv preprint arXiv:2010.02983, 2020. Cited by 30.
I. Montero, N. Pappas, N. A. Smith. "Sentence Bottleneck Autoencoders from Transformer Language Models." arXiv preprint arXiv:2109.00055, 2021. Cited by 21.
M. Hassid, H. Peng, D. Rotem, J. Kasai, I. Montero, N. A. Smith, R. Schwartz. "How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers." arXiv preprint arXiv:2211.03495, 2022. Cited by 18.
I. Montero, S. Longpre, N. Lao, A. J. Frank, C. DuBois. "Pivot Through English: Reliably Answering Multilingual Questions without Document Retrieval." arXiv preprint arXiv:2012.14094, 2020. Cited by 2.