Publication year
2009
Publisher
Brighton : ISCA
In
Proceedings of Interspeech 2009, pp. 2027-2030
Annotation
Interspeech
Publication type
Article in monograph or in proceedings

Organization
Taalwetenschap (Linguistics)
Languages used
English (eng)
Book title
Proceedings of Interspeech 2009
Page start
p. 2027
Page end
p. 2030
Subject
Linguistic Information Processing
Abstract
In this paper, we describe emotion recognition experiments carried out for spontaneous affective speech with the aim to compare the added value of annotation of felt emotion versus annotation of perceived emotion. Using speech material available in the TNO-GAMING corpus (a corpus containing audio-visual recordings of people playing videogames), speech-based affect recognizers were developed that can predict Arousal and Valence scalar values. Two types of recognizers were developed in parallel: one trained with felt emotion annotations (generated by the gamers themselves) and one trained with perceived/observed emotion annotations (generated by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
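The comparison described in the abstract amounts to training the same kind of regressor twice on identical acoustic features, once with felt labels and once with perceived labels, and measuring which label source is easier to predict. The sketch below is a minimal illustration of that comparison, not the authors' actual pipeline: the features, labels, and model (an SVR with cross-validation) are placeholder assumptions.

```python
# Illustrative sketch only: compare predictability of "felt" vs. "perceived"
# arousal labels from the same acoustic features. Data here is synthetic;
# in practice the features would come from the speech recordings and the
# labels from the two annotation sources.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Placeholder data: 200 utterances x 40 acoustic features,
# each utterance with scalar Arousal ratings from two sources.
X = rng.normal(size=(200, 40))
y_felt = rng.normal(size=200)        # self-reported (felt) arousal
y_perceived = rng.normal(size=200)   # observer-rated (perceived) arousal

for name, y in [("felt", y_felt), ("perceived", y_perceived)]:
    # Cross-validated predictions from one regressor per label source.
    pred = cross_val_predict(SVR(kernel="rbf", C=1.0), X, y, cv=5)
    r, _ = pearsonr(y, pred)
    print(f"{name} arousal: Pearson r = {r:.2f}")
```

Whichever label source yields the higher correlation between predicted and annotated values is, in this sense, "easier to predict"; the paper reports this advantage for perceived/observed annotations.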