Coutinho, Eduardo and Dibben, Nicola (2013) Psychoacoustic cues to emotion in speech prosody and music. *Cognition & Emotion*, 27 (4), pp. 658-684. ISSN 0269-9931, 1464-0600
Text: CoutinhoDibben2012_Submission3_Accepted4Publication.pdf (Author Accepted Manuscript, 751kB)
Abstract
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and predicts the emotional expressiveness of speech and music at a level close to that of the human responses. We show that a significant part of listeners' second-by-second reported emotional responses to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
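The prediction pipeline described in the abstract, mapping a time series of the seven psychoacoustic features to continuous emotion estimates, can be caricatured in a few lines. This is a minimal sketch only: the study itself used trained neural network models on real feature extractions, whereas the feature values, weights, and the simple recurrent smoothing step below are illustrative assumptions, not the authors' method.

```python
import math

# The seven psychoacoustic features named in the abstract.
FEATURES = ["loudness", "tempo", "contour", "spectral_centroid",
            "spectral_flux", "sharpness", "roughness"]

def predict_emotion(frames, weights, decay=0.5):
    """Toy second-by-second predictor (illustrative stand-in for the
    study's trained model): each frame's estimate is a weighted sum of
    the seven features, squashed to [-1, 1] and smoothed against the
    previous estimate to mimic the temporal continuity of ratings."""
    estimates = []
    prev = 0.0
    for frame in frames:
        x = sum(weights[f] * frame[f] for f in FEATURES)
        prev = decay * prev + (1 - decay) * math.tanh(x)
        estimates.append(prev)
    return estimates

# Hypothetical input: three seconds of identical feature values.
frames = [{f: 1.0 for f in FEATURES}] * 3
weights = {f: 0.1 for f in FEATURES}  # invented for illustration
trace = predict_emotion(frames, weights)
```

With constant input the smoothed estimate rises toward a plateau, loosely mirroring how a continuous rating settles while the acoustic cues hold steady.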
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Emotion, Arousal and valence, Music, Speech prosody, Psychoacoustics, Neural networks |
| Depositing User: | Symplectic Admin |
| Date Deposited: | 11 Aug 2016 11:00 |
| Last Modified: | 07 Dec 2024 15:34 |
| DOI: | 10.1080/02699931.2012.732559 |
| Related URLs: | |
| URI: | https://livrepository.liverpool.ac.uk/id/eprint/3002878 |