Learning co-speech gesture representations in dialogue through contrastive learning: An intrinsic evaluation
Fulltext:
311608.pdf
Embargo:
until 2025-05-12
Size:
1.525 MB
Format:
PDF
Description:
Publisher’s version
Publication year
2024
Author(s)
Publisher
New York, NY : Association for Computing Machinery (ACM)
ISBN
9798400704628
In
Proceedings of the 26th International Conference on Multimodal Interaction, pp. 274-283
Annotation
ICMI '24: The 26th International Conference on Multimodal Interaction (San Jose, Costa Rica, November 4-8, 2024)
Publication type
Article in monograph or in proceedings
Organization
SW OZ DCC PL
Languages used
English (eng)
Book title
Proceedings of the 26th International Conference on Multimodal Interaction
Page start
p. 274
Page end
p. 283
Subject
Psycholinguistics
Abstract
In face-to-face dialogues, the form-meaning relationship of co-speech gestures varies with contextual factors such as what the gestures refer to and the individual characteristics of speakers. These factors make co-speech gesture representation learning challenging: how can we learn meaningful gesture representations given gestures' variability and their relationship with speech? This paper tackles this challenge by employing self-supervised contrastive learning techniques to learn gesture representations from skeletal and speech information. We propose an approach that includes both unimodal and multimodal pre-training to ground gesture representations in co-occurring speech. For training, we use a face-to-face dialogue dataset rich in representational iconic gestures. We conduct thorough intrinsic evaluations of the learned representations by comparing them with human-annotated pairwise gesture similarity. We also perform a diagnostic probing analysis to assess whether interpretable gesture features can be recovered from the learned representations. Our results show a significant positive correlation with human-annotated gesture similarity and reveal that the similarity between the learned representations is consistent with well-motivated patterns related to the dynamics of dialogue interaction. Moreover, our findings demonstrate that several features concerning the form of gestures can be recovered from the latent representations. Overall, this study shows that multimodal contrastive learning is a promising approach for learning gesture representations, opening the door to using such representations in larger-scale gesture analysis studies.
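The abstract describes multimodal pre-training that grounds gesture representations in co-occurring speech via contrastive learning. The page gives no implementation details, so the sketch below is only a minimal illustration of how such an objective is commonly set up: a symmetric InfoNCE loss over paired skeletal and speech embeddings, where the function name, tensor shapes, and temperature value are assumptions, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def multimodal_info_nce(gesture_emb, speech_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of co-occurring (gesture, speech) pairs.

    gesture_emb, speech_emb: (batch, dim) projections from a skeletal
    encoder and a speech encoder, respectively (hypothetical shapes).
    """
    g = F.normalize(gesture_emb, dim=-1)
    s = F.normalize(speech_emb, dim=-1)
    logits = g @ s.t() / temperature   # (batch, batch) scaled cosine similarities
    targets = torch.arange(g.size(0), device=g.device)
    # Matched pairs sit on the diagonal; contrast in both directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Each mini-batch treats a gesture and the speech it accompanies as a positive pair and all other pairings as negatives, pulling gesture embeddings toward their co-occurring speech.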
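The intrinsic evaluation compares similarities between learned representations with human-annotated pairwise gesture similarity. A minimal sketch of that comparison, assuming cosine similarity between embeddings and a rank correlation against the human ratings (the pair format and variable names are hypothetical):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics.pairwise import cosine_similarity

def similarity_correlation(embeddings, pair_indices, human_scores):
    """Spearman correlation between model and human gesture similarity.

    embeddings:   (n_gestures, dim) learned representations
    pair_indices: (n_pairs, 2) indices of the human-annotated gesture pairs
    human_scores: (n_pairs,) pairwise similarity ratings
    """
    sims = cosine_similarity(embeddings)
    model_scores = np.array([sims[i, j] for i, j in pair_indices])
    rho, p_value = spearmanr(model_scores, human_scores)
    return rho, p_value
```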
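The diagnostic probing analysis asks whether interpretable gesture-form features can be recovered from the latent representations. A common way to operationalize this is a linear probe trained on frozen embeddings; the sketch below assumes categorical feature labels (e.g., handedness) and illustrates the general technique rather than the authors' protocol.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_gesture_feature(embeddings, feature_labels, cv=5):
    """Linear diagnostic probe on frozen embeddings.

    If the probe predicts a gesture-form feature (e.g., handedness or a
    hand-shape category) above chance, that feature is linearly
    recoverable from the latent space.
    """
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, embeddings, feature_labels, cv=cv)
    return scores.mean()
```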
This item appears in the following Collection(s)
- Academic publications [246860]
- Electronic publications [134292]
- Faculty of Social Sciences [30549]