Manipulating word awareness dissociates feed-forward from feedback models of language-perception interactions
Publication year
2015
Source
Neuroscience of Consciousness, 1, (2015), article niv003
ISSN
Publication type
Article / Letter to editor

Organization
PI Group Neurobiology of Language
PI Group Predictive Brain
Donders Centre for Cognitive Neuroimaging
Journal title
Neuroscience of Consciousness
Volume
vol. 1
Languages used
English (eng)
Subject
110 000 Neurocognition of Language; 180 000 Predictive Brain
Abstract
Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of processing these interactions occur. Here we aim to dissociate between two competing models of language–perception interactions: a feed-forward and a feedback model. We capitalized on the fact that the models make different predictions about the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. “rise,” “fall”), directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves feed-forward information processing from low- to high-level regions intact, whereas it abolishes subsequent feedback. Even when the words were masked, participants remained faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. This suggests that language–perception interactions are driven by the feed-forward convergence of linguistic and perceptual information at higher-level conceptual and decision stages.
This item appears in the following Collection(s)
- Academic publications [229222]
- Donders Centre for Cognitive Neuroimaging [3665]
- Electronic publications [111663]
- Open Access publications [80464]