On the opportunities and risks of foundation models. R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, et al. arXiv preprint arXiv:2108.07258, 2021. Cited by 2906.
Quality at a glance: An audit of web-crawled multilingual datasets. J. Kreutzer, I. Caswell, L. Wang, A. Wahab, D. van Esch, N. Ulzii-Orshikh, et al. Transactions of the Association for Computational Linguistics, 10, 50–72, 2022. Cited by 208*.
Learning music helps you read: Using transfer to study linguistic structure in language models. I. Papadimitriou, D. Jurafsky. arXiv preprint arXiv:2004.14601, 2020. Cited by 68*.
Deep subjecthood: Higher-order grammatical features in multilingual BERT. I. Papadimitriou, E. A. Chi, R. Futrell, K. Mahowald. arXiv preprint arXiv:2101.11043, 2021. Cited by 38.
When classifying grammatical role, BERT doesn't care about word order... except when it matters. I. Papadimitriou, R. Futrell, K. Mahowald. arXiv preprint arXiv:2203.06204, 2022. Cited by 31*.
Oolong: Investigating what makes transfer learning hard with controlled studies. Z. Wu, A. Tamkin, I. Papadimitriou. Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023. Cited by 13*.
Injecting structural hints: Using language models to study inductive biases in language learning. I. Papadimitriou, D. Jurafsky. Findings of the Association for Computational Linguistics: EMNLP 2023, 8402–8413, 2023. Cited by 9*.
Mission: Impossible language models. J. Kallini, I. Papadimitriou, R. Futrell, K. Mahowald, C. Potts. arXiv preprint arXiv:2401.06416, 2024. Cited by 3.
Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models. I. Papadimitriou, K. Lopez, D. Jurafsky. arXiv preprint arXiv:2210.05619, 2022. Cited by 3.
Separating the Wheat from the Chaff with BREAD: An open-source benchmark and metrics to detect redundancy in text. I. Caswell, L. Wang, I. Papadimitriou. arXiv preprint arXiv:2311.06440, 2023.
Multilingual BERT, ergativity, and grammatical subjecthood. I. Papadimitriou, E. A. Chi, R. Futrell, K. Mahowald. Society for Computation in Linguistics, 4(1), 2021.
Do large language models use grammar to solve natural language tasks? I. Shah, I. Papadimitriou, R. Futrell, K. Mahowald.
Accessibility-Based Constraints on Morphosyntax in Corpora of 54 Languages. K. Mahowald, I. Papadimitriou, D. Jurafsky, R. Futrell.