The Role of Synchrony and Ambiguity in Speech-Gesture Integration during Comprehension
Publication year
2011
Number of pages
10 p.
Source
Journal of Cognitive Neuroscience, 23, 8, (2011), pp. 1845-1854
ISSN
Publication type
Article / Letter to editor

Organization
SW OZ DCC BO
SW OZ DCC PL
Journal title
Journal of Cognitive Neuroscience
Volume
vol. 23
Issue
iss. 8
Languages used
English (eng)
Page start
p. 1845
Page end
p. 1854
Subject
110 000 Neurocognition of Language; Communicative Competences; DI-BCB_DCC_Theme 1: Language and Communication; DI-BCB_DCC_Theme 2: Perception, Action and Control; Psycholinguistics
Abstract
During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. Such hand gestures have been shown to play an important role in communication, with the two modalities influencing each other's interpretation. A gesture typically overlaps in time with coexpressive speech, but it is often initiated before (and rarely after) the coexpressive speech. The present ERP study investigated what degree of asynchrony between speech and gesture onsets is optimal for semantic integration of concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented at three degrees of asynchrony: in the SOA 0 condition, gesture onset and speech onset were simultaneous; in the SOA 160 and SOA 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant difference between semantically congruent and incongruent gesture-speech combinations on the N400 in the SOA 0 and SOA 160 conditions, but not in the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the difference in onsets does not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.