My research centres on combinatorial semantic cognition. I'm interested in how we learn and represent concepts, and how we flexibly and quickly assemble them into new configurations in the service of a wide range of tasks, from solving problems to understanding a work of fiction.

I use language comprehension as a window into this generative capacity and its neural bases. Because this is an interdisciplinary endeavour, I do my best to do my homework: grounding my work in linguistic theory, attending to constraints from neuropsychology, and borrowing designs and paradigms from cognitive science and psycholinguistics.

More recently, I have begun seeking explanations in computational terms, using tools from deep learning to test intuitions about our semantic competence and to better characterise its computational bases. My colleagues and I have also argued that AI research can benefit from the lessons of animal cognition research, and we have outlined a way forward in a comparative approach to the evaluation of large language models.