In Progress

Grusha Prasad & Tal Linzen. SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser.

2024

Kuan-Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon & Tal Linzen. Large-scale benchmark yields no evidence that language model surprisal explains syntactic disambiguation difficulty. [link]

2023

Aryaman Chobey, Oliver Smith, Anzi Wang & Grusha Prasad. Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior? In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (CoNLL). [arxiv] [acl anthology]

Grusha Prasad & Tal Linzen. Studying relative clause representations: a novel parsing model and priming paradigm. Presented at the 36th Annual Conference on Human Sentence Processing. [slides]

2022

Grusha Prasad. Towards characterizing incremental structure building during sentence comprehension. PhD Dissertation, Johns Hopkins University. [pdf]

Kuan-Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon & Tal Linzen. SPR mega-benchmark shows surprisal tracks construction- but not item-level difficulty. Presented at the 35th Annual Conference on Human Sentence Processing.

2021

Shauli Ravfogel*, Grusha Prasad*, Tal Linzen & Yoav Goldberg. Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction. Proceedings of the 25th Conference on Computational Natural Language Learning (CoNLL). [arxiv] *Equal contribution.

Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela & Adina Williams. To what extent do human explanations of model behavior align with actual model behavior? Proceedings of the 4th Workshop on the Analysis and Interpretation of Neural Networks for Natural Language Processing (BlackboxNLP). [arxiv]

Grusha Prasad & Tal Linzen. Rapid Syntactic Adaptation in Self-Paced Reading: Detectable, but Only With Many Participants. Journal of Experimental Psychology: Learning, Memory, and Cognition. [link] [pdf] [OSF]

Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts & Adina Williams. Dynabench: Rethinking Benchmarking in NLP. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. [pdf]

2020

Grusha Prasad & Tal Linzen. Rapid syntactic adaptation in SPR: detectable, but requires many participants. Presented at the 33rd Annual CUNY Conference on Human Sentence Processing. [slides]

2019

Grusha Prasad, Marten van Schijndel & Tal Linzen. Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models. Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 66–76. [pdf] [video] [slides]

Grusha Prasad & Tal Linzen. How much harder are hard garden path sentences than easy ones? Presented at the 41st Annual Conference of the Cognitive Science Society. [poster]

Grusha Prasad, Marten van Schijndel & Tal Linzen. Using syntactic priming to investigate how recurrent neural networks represent syntax. Presented at the 32nd Annual CUNY Conference on Human Sentence Processing. [poster]

Grusha Prasad & Tal Linzen. Do self-paced reading studies provide evidence for rapid syntactic adaptation? Presented at the 32nd Annual CUNY Conference on Human Sentence Processing. [poster]

2018, 2017

Grusha Prasad, Mark Feinstein & Joanna Morris. The P600 for singular 'they': How the brain reacts when John decides to treat themselves to sushi. Presented at the 31st Annual CUNY Conference on Human Sentence Processing and the 58th Annual Meeting of the Psychonomic Society. [poster] [presentation] [manuscript]