Selected Papers on Whether Transformers Converge with Psychometric Data

In this list, I collect papers bearing on the question of whether transformer models converge with human cognition as captured by psychometric data (eye-tracking, fMRI, EEG, etc.).

Core Papers

  • Caucheteux, C., Gramfort, A., & King, J.-R. (2022). Deep language algorithms predict semantic comprehension from brain activity. Scientific Reports, 12(1). https://doi.org/10.1038/s41598-022-20460-9
  • Caucheteux, C., & King, J.-R. (2022). Brains and algorithms partially converge in natural language processing. Communications Biology, 5(1). https://doi.org/10.1038/s42003-022-03036-1
  • Merkx, D., & Frank, S. L. (2021). Human sentence processing: Recurrence or attention? Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 12–22. https://doi.org/10.18653/v1/2021.cmcl-1.2
  • Michaelov, J. A., Bardolph, M. D., Coulson, S., & Bergen, B. K. (2021). Different kinds of cognitive plausibility: Why are transformers better than RNNs at predicting N400 amplitude? (arXiv:2107.09648). arXiv. http://arxiv.org/abs/2107.09648
  • Oh, B.-D., Clark, C., & Schuler, W. (2021). Surprisal estimators for human reading times need character models. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 3746–3757. https://doi.org/10.18653/v1/2021.acl-long.290
  • Schrimpf, M., Blank, I. A., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., Tenenbaum, J. B., & Fedorenko, E. (2021). The neural architecture of language: Integrative modeling converges on predictive processing. Proceedings of the National Academy of Sciences, 118(45), e2105646118. https://doi.org/10.1073/pnas.2105646118
  • Wilcox, E. G., Gauthier, J., Hu, J., Qian, P., & Levy, R. (2020). On the predictive power of neural language models for human real-time comprehension behavior (arXiv:2006.01912). arXiv. https://doi.org/10.48550/arXiv.2006.01912
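
A Note on Method

Several of these papers share a pipeline: compute per-word surprisal (negative log probability) under a language model, then relate it to a behavioral or neural measure such as self-paced reading times, gaze durations, or N400 amplitude. Below is a minimal sketch of the surprisal step only, assuming the Hugging Face transformers package and GPT-2 as an illustrative model; the papers themselves differ in model choice and in how subword tokens are aligned to words.

    # Per-token surprisal from an autoregressive transformer: the predictor
    # several of the papers above regress against reading times or N400
    # amplitude. GPT-2 and the `transformers` package are illustrative
    # choices here, not the specific setups used in the papers.
    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def token_surprisals(sentence):
        """Return (token, surprisal in bits) for every non-initial token."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
        # Logits at position t-1 score token t, i.e. P(token_t | tokens_<t).
        log_probs = F.log_softmax(logits[0, :-1], dim=-1)
        targets = ids[0, 1:]
        nll = -log_probs[torch.arange(targets.size(0)), targets]
        bits = nll / torch.log(torch.tensor(2.0))  # convert nats to bits
        tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
        return list(zip(tokens, bits.tolist()))

    for tok, s in token_surprisals("The cat sat on the mat."):
        print(f"{tok!r}\t{s:.2f} bits")

In the papers, token-level surprisals are typically summed over the subword tokens of each word before entering a regression against the psychometric measure.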

See also my post on this issue.