Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension.
Source
Journal of Speech, Language, and Hearing Research, 60, 1, (2017), pp. 212-222
Publication type
Article / Letter to editor
Organization
PI Group Neurobiology of Language
Taalwetenschap
SW OZ DCC PL
Journal title
Journal of Speech, Language, and Hearing Research
Volume
vol. 60
Issue
iss. 1
Languages used
English (eng)
Page start
p. 212
Page end
p. 222
Subject
160 000 Neuronal Oscillations; DI-BCB_DCC_Theme 1: Language and Communication; Giving cognition a hand: Linking spatial cognition to linguistic expression in native and late signers and bimodal bilinguals; Giving speech a hand: How functional brain networks support gestural enhancement of language; Language & Communication; Language in Mind; Language in our hands: Acquisition of spatial language in deaf and hearing children; Multimodal language and communication; Psycholinguistics; Language in Interaction; niet-RU-publicaties
Abstract
Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. Conclusions: When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
Subsidient
NWO (Grant code: info:eu-repo/grantAgreement/NWO/Gravitation/024.001.006)
This item appears in the following Collection(s)
- Academic publications [246515]
- Donders Centre for Cognitive Neuroimaging [4040]
- Electronic publications [134102]
- Faculty of Arts [30004]
- Faculty of Social Sciences [30494]
- Open Access publications [107633]