Modality-independent decoding of semantic information from the human brain
Source: Cerebral Cortex, 24(2), 2014, pp. 426-434
Article / Letter to editor
Subject: Cognitive artificial intelligence; DI-BCB_DCC_Theme 1: Language and Communication; DI-BCB_DCC_Theme 4: Brain Networks and Neuronal Communication; Psycholinguistics
The ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies, mostly for a single specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic categorization task with four stimulus modalities (spoken and written names, photographs, and natural sounds). Significant classification performance was achieved in all four modalities. Modality-independent decoding was implemented by training and testing the searchlight method across modalities. This allowed the localization of the brain regions that correctly discriminated between the categories independent of stimulus modality. The analysis revealed large clusters of voxels in the left inferior temporal cortex and in frontal regions. These voxels also allowed category discrimination in a free recall session, in which subjects recalled the objects in the absence of external stimuli. The results show that semantic information can be decoded from the fMRI signal independently of the input modality, and they have clear implications for understanding the functional mechanisms of semantic memory.
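The cross-modal training-and-testing logic described in the abstract can be illustrated with a minimal sketch. This is not the study's actual pipeline (which used a searchlight over fMRI volumes): the data here are synthetic, the two-category nearest-centroid classifier is a stand-in for whatever classifier the searchlight used, and all names and parameters are illustrative assumptions. The key idea it demonstrates is training on trials from one modality and testing on trials from another; above-chance accuracy would indicate modality-independent category information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative, not from the study): voxel patterns
# for 2 semantic categories, measured under 2 stimulus modalities.
n_voxels, n_trials = 50, 40
category_signal = {0: rng.normal(0, 1, n_voxels),   # shared across modalities
                   1: rng.normal(0, 1, n_voxels)}

def simulate_modality(modality_offset):
    """Simulate trials: shared category signal + modality-specific offset + noise."""
    X, y = [], []
    for cat in (0, 1):
        for _ in range(n_trials):
            X.append(category_signal[cat] + modality_offset
                     + rng.normal(0, 0.8, n_voxels))
            y.append(cat)
    return np.array(X), np.array(y)

# Two modalities, e.g. written vs. spoken names (hypothetical labels).
X_written, y_written = simulate_modality(rng.normal(0, 0.3, n_voxels))
X_spoken,  y_spoken  = simulate_modality(rng.normal(0, 0.3, n_voxels))

# Train a nearest-centroid classifier on one modality ...
centroids = np.array([X_written[y_written == c].mean(axis=0) for c in (0, 1)])

# ... and test it on the other modality.
dists = np.linalg.norm(X_spoken[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_spoken).mean()
print(f"cross-modal decoding accuracy: {accuracy:.2f}")
```

In the study this train/test split would be run within each searchlight sphere, producing a map of where in the brain cross-modal decoding succeeds; the sketch above shows only the classification step for a single set of voxels.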