When the brain reads or decodes a sentence in English or Portuguese, its neural activation patterns are the same, researchers at Carnegie Mellon University have found.
The study is the first to demonstrate that different languages have similar neural signatures for describing events and scenes. Using a machine-learning algorithm, the research team mapped the relationship between sentence meaning and brain activation patterns in English, then recognized sentence meaning from activation patterns in Portuguese.
The findings can be used to improve machine translation, brain decoding across languages, and, potentially, second-language instruction. Marcel Just, professor of psychology at Carnegie Mellon, explains:
“This tells us that, for the most part, the language we happen to learn to speak does not change the organization of the brain.
Semantic information is represented in the same place in the brain and the same pattern of intensities for everyone. Knowing this means that brain-to-brain or brain-to-computer interfaces can probably be the same for speakers of all languages.”
For the study, 15 native Portuguese speakers (eight were bilingual in Portuguese and English) read 60 sentences in Portuguese while in a functional magnetic resonance imaging (fMRI) scanner. A computational model developed at Carnegie Mellon was able to predict which sentences the participants were reading in Portuguese, based only on activation patterns.
The computational model uses a set of 42 concept-level semantic features, plus six markers of each concept’s role in the sentence (such as agent or action), to predict the brain activation patterns evoked by English sentences.
With 67 percent accuracy, the model predicted which sentences were read in Portuguese. The resulting brain images showed that the activation patterns for the 60 sentences were in the same brain locations and at similar intensity levels for both English and Portuguese sentences.
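The train-on-one-language, test-on-the-other procedure can be sketched with synthetic data. In the minimal sketch below, a ridge-regression decoder (a stand-in for the study’s actual model) learns a linear map from the 42 + 6 = 48 sentence features to activation patterns on “English” data, then identifies “Portuguese” trials by matching observed patterns against the model’s predictions. The dimensions, noise levels, and the linear generative assumption are all illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of 60 sentences is described by a 48-dimensional
# vector (42 concept-level semantic features + 6 role markers, as in the
# study). Feature values and voxel counts here are synthetic stand-ins.
n_sentences, n_features, n_voxels = 60, 48, 200
features = rng.normal(size=(n_sentences, n_features))

# Simulate a feature-to-activation mapping shared across languages (the
# study's core finding), plus independent per-language measurement noise.
true_map = rng.normal(size=(n_features, n_voxels))
english_acts = features @ true_map + 0.1 * rng.normal(size=(n_sentences, n_voxels))
portuguese_acts = features @ true_map + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Fit a ridge-regression decoder on the English data: learn weights W
# such that features @ W approximates the activations.
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ english_acts)

# For each Portuguese trial, pick the sentence whose predicted activation
# pattern is most similar (cosine) to the observed pattern.
predicted = features @ W

def best_match(observed, candidates):
    obs = observed / np.linalg.norm(observed)
    cand = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return int(np.argmax(cand @ obs))

correct = sum(best_match(portuguese_acts[i], predicted) == i
              for i in range(n_sentences))
accuracy = correct / n_sentences
print(f"cross-language identification accuracy: {accuracy:.2f}")
```

Because the simulated mapping really is shared across the two “languages,” the sketch identifies sentences near-perfectly; with real fMRI data, noise and inter-subject variability pull accuracy down toward figures like the study’s 67 percent.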
Cross-language Prediction Model
Additionally, the results revealed that the activation patterns could be grouped into four semantic categories, depending on the sentence’s focus: people, places, actions, and feelings. The groupings were very similar across languages, reinforcing that the organization of information in the brain is the same regardless of the language in which it is expressed.
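The four-way grouping can be illustrated with a toy clustering sketch: synthetic “activation patterns” generated around four category centroids are recovered by a small k-means routine. Everything here (pattern dimensionality, noise level, and the choice of k-means itself) is an assumption for illustration; the paper’s analysis of the real fMRI data is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 60 "sentence activation patterns", 15 per semantic
# category (people, places, actions, feelings), each drawn around its own
# category centroid. Real inputs would be the fMRI sentence patterns.
n_per, n_voxels, k = 15, 100, 4
cat_centroids = rng.normal(scale=3.0, size=(k, n_voxels))
patterns = np.vstack([c + rng.normal(size=(n_per, n_voxels)) for c in cat_centroids])
labels_true = np.repeat(np.arange(k), n_per)

def farthest_point_init(X, k):
    # Deterministic seeding: start from the first point, then repeatedly
    # add the point farthest from all centers chosen so far.
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=50):
    centers = farthest_point_init(X, k)
    for _ in range(iters):
        assign = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[assign == j].mean(0) if (assign == j).any() else centers[j]
                            for j in range(k)])
    return assign

assign = kmeans(patterns, k)

# Purity: fraction of patterns whose cluster is dominated by one true category.
purity = sum(np.bincount(labels_true[assign == j], minlength=k).max()
             for j in range(k)) / len(patterns)
print(f"cluster purity: {purity:.2f}")
```

With well-separated synthetic categories the clusters come back essentially pure, mirroring the finding that sentence patterns group cleanly by semantic focus in both languages.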
“The cross-language prediction model captured the conceptual gist of the described event or state in the sentences, rather than depending on particular language idiosyncrasies. It demonstrated a meta-language prediction capability from neural signals across people, languages, and bilingual status,”
says Ying Yang, a postdoctoral associate in psychology and first author of the study.
The research was funded by the Office of the Director of National Intelligence and the Intelligence Advanced Research Projects Activity via the US Air Force Research Laboratory.
Ying Yang et al., “Commonality of neural representations of sentences across languages: Predicting brain activation during Portuguese sentence comprehension using an English-based model of brain function,” NeuroImage. DOI: 10.1016/j.neuroimage.2016.10.029
Image: José Carlos Casimiro/Flickr