Neural decoding with hierarchical generative models
Publication year
2010
Source
Neural Computation, 22, 12, (2010), pp. 3127-3142
ISSN
Publication type
Article / Letter to editor
Related datasets
Organization
SW OZ DCC AI
Journal title
Neural Computation
Volume
vol. 22
Issue
iss. 12
Languages used
English (eng)
Page start
p. 3127
Page end
p. 3142
Subject
Cognitive artificial intelligence; DI-BCB_DCC_Theme 4: Brain Networks and Neuronal Communication
Abstract
Recent research has shown that reconstruction of perceived images based on hemodynamic response as measured with functional magnetic resonance imaging (fMRI) is starting to become feasible. In this letter, we explore reconstruction based on a learned hierarchy of features by employing a hierarchical generative model that consists of conditional restricted Boltzmann machines. In an unsupervised phase, we learn a hierarchy of features from data, and in a supervised phase, we learn how brain activity predicts the states of those features. Reconstruction is achieved by sampling from the model, conditioned on brain activity. We show that by using the hierarchical generative model, we can obtain good-quality reconstructions of visual images of handwritten digits presented during an fMRI scanning session.
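The reconstruction step described above — sampling from the generative model, conditioned on brain activity — rests on block-Gibbs sampling in restricted Boltzmann machines. The paper uses conditional RBMs stacked into a hierarchy with learned parameters; the sketch below shows only the underlying Gibbs mechanics in a plain binary RBM, with random (untrained) weights and hypothetical layer sizes chosen for illustration.

```python
import numpy as np

# Illustrative binary RBM; weights are random, not learned, and the
# dimensions (784 visible units for 28x28 digit images, 128 hidden
# units) are assumptions, not values from the paper.
rng = np.random.default_rng(0)

n_visible, n_hidden = 784, 128
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # pairwise weights
b_v = np.zeros(n_visible)                          # visible biases
b_h = np.zeros(n_hidden)                           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """Compute P(h=1 | v), then draw a Bernoulli sample."""
    p = sigmoid(v @ W + b_h)
    return (rng.random(p.shape) < p).astype(float), p

def sample_visible(h):
    """Compute P(v=1 | h), then draw a Bernoulli sample."""
    p = sigmoid(h @ W.T + b_v)
    return (rng.random(p.shape) < p).astype(float), p

def gibbs_step(v):
    """One block-Gibbs sweep v -> h -> v'."""
    h, _ = sample_hidden(v)
    v_new, p_v = sample_visible(h)
    return v_new, p_v

# Start from a random binary image and take one Gibbs step.
v0 = rng.integers(0, 2, n_visible).astype(float)
v1, p_v1 = gibbs_step(v0)
```

In the paper's setting, the conditioning on fMRI activity would enter as an additional input that shifts the unit biases, so repeated Gibbs sweeps draw reconstructions consistent with the measured brain response rather than unconstrained samples.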
This item appears in the following Collection(s)
- Academic publications [246625]
- Electronic publications [134196]
- Faculty of Social Sciences [30504]
- Open Access publications [107722]