Researchers in Finland have predicted, with considerable accuracy, whether people were musicians by applying computational music analysis and machine learning to brain imaging data collected during music listening.
The results underline the striking impact of musical training on our neural responses to music: it is possible to distinguish musicians' brains from non-musicians' brains despite independent factors such as musical preference and familiarity.
The research also showed that the brain areas that best predict musicianship lie predominantly in the frontal and temporal regions of the brain's right hemisphere. These findings are consistent with previous work on how the brain processes certain acoustic characteristics of music, as well as intonation in speech.
The study used functional magnetic resonance imaging (fMRI) brain data collected by Professor Elvira Brattico’s team at Aarhus University. The data was collected from 18 musicians and 18 non-musicians while they attentively listened to music of different genres. Computational algorithms were applied to extract musical features from the presented music.
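Dedicated music information retrieval tools are typically used for this kind of feature extraction. As a loose illustration only, not the authors' actual pipeline, here is a minimal numpy sketch of two classic low-level audio features; the synthetic tone and sampling rate are made up for the example:

```python
import numpy as np

def spectral_centroid(y, sr):
    """Frequency-weighted mean of the magnitude spectrum (a 'brightness' proxy)."""
    mag = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

def zero_crossing_rate(y):
    """Fraction of consecutive samples where the signal changes sign (a 'noisiness' proxy)."""
    return float(np.mean(np.abs(np.diff(np.sign(y))) > 0))

sr = 8000                      # hypothetical sampling rate
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)   # one second of a pure A4 tone

print(round(spectral_centroid(tone, sr)))  # prints 440
```

In the actual study such feature time series (including higher-level features like tonality and pulse) were computed from the presented music and related to the fMRI signal.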
“A novel feature of our approach was that, instead of relying on static representations of brain activity, we modelled how music is processed in the brain over time. Taking the temporal dynamics into account was found to improve the results remarkably,”
explained Pasi Saari, Postdoctoral Researcher at the University of Jyväskylä and the main author of the study.
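To see why temporal dynamics matter, consider a toy numpy sketch, assuming a hypothetical fixed lag between stimulus feature and brain response (the real haemodynamic response is a smeared, variable delay): a zero-lag "static" correlation misses a relationship that a lag-aware analysis recovers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical musical feature time series, one value per fMRI volume.
feature = rng.normal(size=200)

# Simulated regional response: the feature delayed by 3 samples plus noise,
# mimicking the lag between a stimulus and the measured brain signal.
lag = 3
response = np.roll(feature, lag) + 0.5 * rng.normal(size=200)
response[:lag] = rng.normal(size=lag)  # overwrite wrapped-around samples

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

static = corr(feature, response)                                      # zero-lag fit
dynamic = max(corr(feature[:-k], response[k:]) for k in range(1, 8))  # best lag

print(f"static: {static:.2f}, lag-aware: {dynamic:.2f}")
```

The lag-aware correlation is far stronger than the zero-lag one, which is the intuition behind modelling music processing in the brain over time rather than as a static snapshot.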
As the last step of modelling, the researchers used machine learning to form a model that predicts musicianship from a combination of brain regions.
The machine learning model was able to predict the listeners' musicianship with 77% accuracy, a result on a par with similar participant-classification studies in, for example, clinical populations of brain-damaged patients.
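As a hedged sketch of this kind of participant classification, not the authors' method, here is a leave-one-out cross-validated nearest-centroid classifier on made-up "brain response" features for 18 + 18 hypothetical participants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 18 "musicians" and 18 "non-musicians", each
# described by a few brain-response features (the real study used
# time-resolved fMRI responses to musical features, not these toy numbers).
musicians = rng.normal(loc=1.0, scale=0.5, size=(18, 4))
non_musicians = rng.normal(loc=-1.0, scale=0.5, size=(18, 4))
X = np.vstack([musicians, non_musicians])
y = np.array([1] * 18 + [0] * 18)

# Leave-one-out cross-validation: hold out each participant, fit the class
# centroids on the remaining 35, then predict the held-out participant.
correct = 0
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    c1 = X[mask & (y == 1)].mean(axis=0)
    c0 = X[mask & (y == 0)].mean(axis=0)
    pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
    correct += pred == y[i]

accuracy = correct / len(X)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```

Because the toy classes are well separated, this sketch classifies nearly perfectly; real fMRI features overlap far more, which is why 77% accuracy on 36 participants is a notable result.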
The areas where music processing best predicted musicianship resided mostly in the right hemisphere, and included areas previously found to be associated with engagement and attention, processing of musical conventions, and processing of music-related sound features (e.g. pitch and tonality).
“These areas can be regarded as core structures in music processing which are most affected by intensive, lifelong musical training,”
stated Iballa Burunat, Postdoctoral Researcher at the University of Jyväskylä and a co-author of the study.
In these areas, the processing of higher-level features such as tonality and pulse was the best predictor of musicianship, suggesting that musical training affects particularly the processing of these aspects of music.
The research was funded by the Academy of Finland and the Danish National Research Foundation.
Pasi Saari, Iballa Burunat, Elvira Brattico & Petri Toiviainen:
Decoding Musical Training from Dynamic Processing of Musical Features in the Brain.
Scientific Reports 8, Article number: 708 (2018). doi:10.1038/s41598-018-19177-5
Top Image: Red: left/right anterior cingulate gyrus; Green: right inferior frontal gyrus; Blue: right superior temporal gyrus; Gray: caudate nucleus, middle frontal gyrus, inferior frontal gyrus. Image courtesy of University of Jyväskylä