In a recent study published in Communications Psychology, a Nature Portfolio journal, researchers from NYU used high-resolution electrocorticography (ECoG) to investigate how the human brain assembles sentences from individual words. The team was led by Adeen Flinker, Associate Professor of Biomedical Engineering at NYU Tandon and of Neurology at NYU Grossman School of Medicine, and Postdoctoral Researcher Adam Morgan. While much of our understanding of language production has been built on single-word tasks such as picture naming, this study directly tests whether those insights extend to the far more complex act of producing full sentences.
Ten neurosurgical patients undergoing epilepsy treatment participated in a set of speech tasks that included naming isolated words and describing cartoon scenes using full sentences. By applying machine learning to ECoG data — recorded directly from electrodes on the brain’s surface — the researchers first identified the unique pattern of brain activity for each of six words when they were said in isolation. They then tracked these patterns over time while patients used the same set of words in sentences.
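The cross-task decoding logic described above can be illustrated with a minimal sketch: train a classifier on neural activity patterns from isolated-word trials, then apply it to time windows of a sentence recording to track when each word's pattern appears. This is not the authors' code; the classifier choice, electrode count, noise level, and all variable names are illustrative assumptions, and the data here are synthetic.

```python
# Illustrative sketch of cross-task word decoding (synthetic data, not the
# study's actual pipeline). Assumes a scikit-learn-style linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_electrodes, n_words, n_trials = 64, 6, 40  # hypothetical dimensions

# Each word gets a stable spatial pattern across electrodes.
word_patterns = rng.normal(size=(n_words, n_electrodes))

# Isolated-word trials: each trial is a word's pattern plus noise.
X_train = np.vstack([
    word_patterns[w] + 0.5 * rng.normal(size=(n_trials, n_electrodes))
    for w in range(n_words)
])
y_train = np.repeat(np.arange(n_words), n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A "sentence" as a sequence of time windows, each dominated by one of the
# trained words (here: word 2, then word 5), again with noise added.
sentence_order = [2, 2, 5, 5]
sentence_windows = np.vstack([
    word_patterns[w] + 0.5 * rng.normal(size=n_electrodes)
    for w in sentence_order
])

# Decode which word's pattern is active in each time window.
decoded = clf.predict(sentence_windows)
print(decoded.tolist())
```

Because the classifier is trained only on isolated-word trials, any above-chance decoding during sentence production indicates that the word-specific patterns generalize across tasks, which is the premise the study tests with real ECoG recordings.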
The findings show that while the cortical patterns encoding individual words remain stable across tasks, the way the brain sequences and manages those words changes with sentence structure. In sensorimotor regions, activity closely followed the spoken order of the words. But in prefrontal regions, particularly the inferior and middle frontal gyri, the encoding was strikingly different: these regions represented not just which words patients were planning to say, but also the syntactic role each word played (subject or object) and how that role fit into the grammatical structure of the sentence.
The researchers also found that the prefrontal cortex sustains word representations throughout the entire duration of passive sentences such as "Frankenstein was hit by Dracula." In these more complex sentences, both nouns remained active in the prefrontal cortex for the whole utterance, even while one of them was being spoken. This sustained, parallel encoding suggests that constructing syntactically non-canonical sentences requires the brain to hold and manipulate more information over time, possibly recruiting additional working-memory resources.
Interestingly, this dynamic aligns with a longstanding observation in linguistics: most of the world's languages favor placing subjects before objects. The researchers propose that this could be due, in part, to neural efficiency. Processing less common structures like passives appears to demand more cognitive effort, which over time could have shaped which structures languages favor.
Ultimately, this work offers a detailed glimpse into the cortical choreography of sentence production and challenges some of the long-standing assumptions about how speech unfolds in the brain. Rather than a simple linear process, it appears that speaking involves a flexible interplay between stable word representations and syntactically driven dynamics, shaped by the demands of grammatical structure.
Alongside Flinker and Morgan, Orrin Devinsky, Werner K. Doyle, Patricia Dugan, and Daniel Friedman of NYU Langone contributed to this research. It was supported by multiple grants from the National Institutes of Health.
Journal
Communications Psychology
Method of Research
Computational simulation/modeling
Subject of Research
People
Article Title
Decoding words during sentence production with ECoG reveals syntactic role encoding and structure-dependent temporal dynamics
Article Publication Date
3-Jun-2025