Predicting the similarity between expressive performances of music from measurements of tempo and dynamics
Source: The Journal of the Acoustical Society of America, 117(1), 2005, pp. 391-399
Article / Letter to editor
SW OZ DCC KI
SW OZ NICI CO
Subject: Cognitive artificial intelligence
Measurements of tempo and dynamics from audio files or MIDI data are frequently used to gain insight into a performer's contribution to music. The measured variations in tempo and dynamics are often represented in different formats by different authors. Few systematic comparisons have been made between these representations. Moreover, it is unknown which data representation comes closest to subjective perception. The reported study tests the perceptual validity of existing data representations by comparing their ability to explain the subjective similarity between pairs of performances. In two experiments, 40 participants rated the similarity between performances of a Chopin prelude and a Mozart sonata. Models based on different representations of the tempo and dynamics of the performances were fitted to these similarity ratings. The results favor data representations of performances other than those generally used, and imply that comparisons between performances are made perceptually in a different way than is often assumed. For example, the best fit was obtained with models based on absolute tempo and on absolute tempo times loudness, while conventional models based on normalized variations, or on correlations between tempo profiles and loudness profiles, did not explain the similarity ratings well.
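To make the contrast between representations concrete, the following sketch computes the dissimilarity between two performances under three of the representation families the abstract names. This is an illustration under assumed definitions, not the paper's actual models: the per-beat tempo and loudness values, the use of z-scoring for "normalized variations," and Euclidean distance as the comparison metric are all choices made here for demonstration.

```python
# Illustrative sketch (not the study's fitted models): comparing two
# performances from per-beat tempo (BPM) and loudness profiles under
# three representations mentioned in the abstract. Data are invented.
from math import sqrt
from statistics import mean, stdev

tempo_a = [120.0, 118.0, 125.0, 110.0, 115.0]   # hypothetical performance A
tempo_b = [ 90.0,  89.0,  95.0,  82.0,  86.0]   # hypothetical performance B
loud_a  = [0.8, 0.7, 0.9, 0.6, 0.7]
loud_b  = [0.6, 0.5, 0.7, 0.4, 0.5]

def euclidean(p, q):
    """Pointwise Euclidean distance between two equal-length profiles."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def zscore(p):
    """Normalize a profile to zero mean and unit standard deviation."""
    m, s = mean(p), stdev(p)
    return [(x - m) / s for x in p]

# 1. Conventional: distance between normalized tempo profiles, which
#    discards overall tempo and keeps only the shape of the variation.
d_norm = euclidean(zscore(tempo_a), zscore(tempo_b))

# 2. Absolute tempo: distance between raw BPM profiles, so a globally
#    faster or slower performance registers as genuinely different.
d_abs = euclidean(tempo_a, tempo_b)

# 3. Absolute tempo times loudness: combine both dimensions pointwise
#    before comparing.
d_combo = euclidean([t * l for t, l in zip(tempo_a, loud_a)],
                    [t * l for t, l in zip(tempo_b, loud_b)])

print(d_norm, d_abs, d_combo)
```

Because performance B here has nearly the same variation shape as A but a much slower overall tempo, the normalized distance is small while the absolute-tempo distance is large, which is exactly the kind of difference the abstract reports listeners to be sensitive to.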