Our brains are brimming with maps: visual maps of our external environment, and motor maps that define how we interact physically within it.
Somehow these separate frames of reference must be brought into correspondence with one another for us to act, whether we're grasping a coffee cup or hitting a tennis ball.
How that happens is the focus of a new study in which researchers used neuroimaging to decode how the brain transforms sensory input into action.
Lead author Deborah Barany, a doctoral student in dynamical neuroscience at the University of California, Santa Barbara, and her colleagues measured brain activity as participants, lying in an MRI scanner, made wrist movements in different directions – left and right, or up and down – in one of two hand postures.
“Trying to record the movements performed in the scanner was the first major challenge of the experiment,” Barany says.
To accomplish this, she implemented a sophisticated setup in which cameras recorded LED lights placed on the right hand to track very fine movements during brain scanning.
“We were ultimately able to gather a rich set of movement data to relate to brain activity.”
The researchers used an analysis technique that allowed them to parse out the specific patterns of brain activity corresponding to different aspects of movement.
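To illustrate the general idea behind such pattern analysis (this is a hypothetical sketch, not the authors' actual pipeline): each trial yields a vector of voxel activations, and a classifier trained on labeled trials is tested on held-out trials. If the classifier predicts the movement feature – say, direction – better than chance, the region's activity pattern carries information about that feature. The simulated data and the nearest-centroid classifier below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50
n_trials = 40  # trials per condition

# Simulate two conditions ("left" vs "right" wrist movements) as noisy
# voxel-activation patterns scattered around two distinct mean patterns.
mean_left = rng.normal(0, 1, n_voxels)
mean_right = rng.normal(0, 1, n_voxels)
left = mean_left + rng.normal(0, 1.5, (n_trials, n_voxels))
right = mean_right + rng.normal(0, 1.5, (n_trials, n_voxels))

def nearest_centroid_accuracy(a, b):
    """Leave-one-out nearest-centroid decoding of condition a vs. b."""
    X = np.vstack([a, b])
    y = np.array([0] * len(a) + [1] * len(b))
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # hold out trial i
        c0 = X[mask & (y == 0)].mean(axis=0)   # centroid of condition 0
        c1 = X[mask & (y == 1)].mean(axis=0)   # centroid of condition 1
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(X)

acc = nearest_centroid_accuracy(left, right)
print(f"decoding accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

Real fMRI decoding studies typically use more powerful classifiers and cross-validation schemes, but the logic is the same: above-chance decoding implies the region distinguishes the conditions.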
Superior Parietal Lobule
Building on previous research, they identified brain areas containing maps both for the spatial location a movement was aimed at and for the actual movement direction. One of these areas, the superior parietal lobule, carried information about both location and direction, suggesting that it helps carry out the transformation.
An unexpected finding was that these movement-related maps were highly sensitive to the posture of the hand.
“It was surprising to see representations of posture throughout the motor system,” says Barany. “This may mean that posture-dependent planning is more widespread in the brain than previously thought.”
Scott Grafton, a professor in the Department of Psychological and Brain Sciences and director of the campus's Action Lab, where Barany is a member, said:
“Even if you can distinguish brain activity for two actions, you still don’t know if that is because of differences in where things are in the world or differences in the way the muscles are being used. What Deborah figured out how to do is overcome that uncertainty using state-of-the-art decoding methods.”
This work helps explain the deficits seen in patients whose neurological damage disrupts visuomotor transformations. The findings may guide the development of therapies for optic ataxia (an inability to reach accurately for objects) and ideomotor apraxia (a deficit in imitating gestures on command).
Deborah A. Barany, Valeria Della-Maggiore, Shivakumar Viswanathan, Matthew Cieslak, and Scott T. Grafton
Feature Interactions Enable Decoding of Sensorimotor Transformations for Goal-Directed Movement
The Journal of Neuroscience, 14 May 2014, 34(20): 6860-6873; doi: 10.1523/JNEUROSCI.5173-13.2014