Radboud Repository


      Audiovisual speech perception in a predictive framework

      Creators
      Todorovic, A.
      Lange, F.P. de
      Date of Archiving
      2019
      Archive
      Radboud Data Repository
      Data archive handle
      https://hdl.handle.net/11633/aabpzmwz
      Publication type
      Dataset
      Access level
      Restricted access
      Please use this identifier to cite or link to this item: https://hdl.handle.net/2066/204132
      Organization
      PI Group Predictive Brain
      SW OZ DCC CO
      Audience(s)
      Life sciences
      Languages used
      English
      Keywords
      prediction
      Abstract
      In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual constraints have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.
      This item appears in the following Collection(s)
      • Datasets [1490]
      • Donders Centre for Cognitive Neuroimaging [3665]
      • Faculty of Social Sciences [28734]