Electrocorticography reveals continuous auditory and visual speech tracking in temporal and occipital cortex
Source: European Journal of Neuroscience, 51(5), (2020), pp. 1364-1376
Article / Letter to editor
Subject: 110 000 Neurocognition of Language; Cognitive Artificial Intelligence
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of patients with epilepsy) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that the posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and the visual speech envelopes. These same regions, as well as inferior temporal cortex, responded more strongly to a dynamic video of a talking face than to auditory speech paired with a static face. Occipital cortex and pSTG therefore carry temporal information about both auditory and visual speech dynamics. Visual speech tracking in pSTG may be a mechanism for enhancing perception of degraded auditory speech.