Flexible reference frames for grasp planning in human parietofrontal cortex
Source: eNeuro, 2(3), 2015, article e0008-15.2015
Type: Article / Letter to editor
Subject: Action, intention, and motor control; DI-BCB_DCC_Theme 2: Perception, Action and Control
Reaching to a location in space is supported by a cortical network that operates in a variety of reference frames. Computational models and recent fMRI evidence suggest that this diversity originates from neuronal populations dynamically shifting between reference frames as a function of task demands and sensory modality. In this human fMRI study, we extend this framework to nonmanipulative grasping movements, an action that depends on multiple properties of a target, not only its spatial location. By presenting targets visually or somaesthetically, and by manipulating gaze direction, we investigate how information about a target is encoded in gaze- and body-centered reference frames in dorsomedial and dorsolateral grasping-related circuits. Data were analyzed using a novel multivariate approach that combines classification and cross-classification measures to explicitly aggregate evidence in favor of and against the presence of gaze- and body-centered reference frames. We used this approach to determine whether reference frames are differentially recruited depending on the availability of sensory information, and where in the cortical networks there is common coding across modalities. Only in the left anterior intraparietal sulcus (aIPS) was coding of the grasping target modality dependent: predominantly gaze-centered for visual targets and body-centered for somaesthetic targets. Left superior parieto-occipital cortex consistently coded targets for grasping in a gaze-centered reference frame. Left anterior precuneus and premotor areas operated in a modality-independent, body-centered frame. These findings reveal how dorsolateral grasping area aIPS could play a role in the transition between modality-independent gaze-centered spatial maps and body-centered motor areas.