Publications
In Progress
Grusha Prasad & Tal Linzen. SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser. [arxiv]
2024
Grusha Prasad* & Forrest Davis*. Training an NLP Scholar at a Small Liberal Arts College: A Backwards Designed Course Proposal. Proceedings of the 6th Workshop on Teaching NLP. [acl anthology] *equal contribution.
Kuan-Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon & Tal Linzen. Large-scale benchmark yields no evidence that language model surprisal explains syntactic disambiguation difficulty. Journal of Memory and Language. [link]
2023
Aryaman Chobey, Oliver Smith, Anzi Wang & Grusha Prasad. Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior? In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (CoNLL). [arxiv] [acl anthology]
2022
Grusha Prasad. Towards characterizing incremental structure building during sentence comprehension. PhD Dissertation, Johns Hopkins University. [pdf]
2021
Shauli Ravfogel*, Grusha Prasad*, Tal Linzen & Yoav Goldberg. Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction. Proceedings of the 25th Conference on Computational Natural Language Learning (CoNLL). [arxiv] *equal contribution.
Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela & Adina Williams. To what extent do human explanations of model behavior align with actual model behavior? Proceedings of the 4th Workshop on the Analysis and Interpretation of Neural Networks for Natural Language Processing (BlackBox NLP). [arxiv]
Grusha Prasad & Tal Linzen. Rapid Syntactic Adaptation in Self-Paced Reading: Detectable, but Only With Many Participants. Journal of Experimental Psychology: Learning, Memory, and Cognition. [link] [pdf] [OSF]
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts & Adina Williams. Dynabench: Rethinking Benchmarking in NLP. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. [pdf]
2019
Grusha Prasad, Marten van Schijndel & Tal Linzen. Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models. Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 66–76. [pdf] [video] [slides]