Lateralized electrical brain activity reveals covert attention allocation during speaking

Fulltext:
162252.pdf
Embargo:
until further notice
Size:
819.0 KB
Format:
PDF
Description:
Publisher’s version
Source
Neuropsychologia, 95, (2017), pp. 101-110
Publication type
Article / Letter to editor

Organization
SW OZ DCC PL
Neurology
Journal title
Neuropsychologia
Volume
vol. 95
Languages used
English (eng)
Page start
p. 101
Page end
p. 110
Subject
110 000 Neurocognition of Language; DI-BCB_DCC_Theme 1: Language and Communication; Psycholinguistics; Radboudumc 3: Disorders of movement DCMN: Donders Center for Medical Neuroscience
Abstract
Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers' eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers' covert attention allocation as they produced short utterances to describe pairs of objects (e.g., "dog and chair"). The processing difficulty of each object was varied by presenting it in an upright (easy) or upside-down (difficult) orientation. Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200-350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking.