Do individual differences influence moment-by-moment reports of emotion perceived in music and speech prosody?

Dibben, Nicola; Coutinho, E. (ORCID: 0000-0001-5234-1497); Vilar, José A.; Estévez-Pérez, Graciela (2018) Do individual differences influence moment-by-moment reports of emotion perceived in music and speech prosody? Frontiers in Behavioral Neuroscience, 12.


fnbeh-12-00184.pdf - OA Published Version (1MB)

Abstract

Comparison of emotion perception in music and prosody has the potential to contribute to an understanding of their speculated shared evolutionary origin. Previous research suggests shared sensitivity to and processing of music and speech, but less is known about how emotion perception in the auditory domain might be influenced by individual differences. Personality, emotional intelligence, gender, musical training and age exert some influence on discrete, summative judgments of perceived emotion in music and speech stimuli. However, music and speech are temporal phenomena, and little is known about whether individual differences influence moment-by-moment perception of emotion in these domains. A behavioral study collected two main types of data: continuous ratings of perceived emotion while listening to extracts of music and speech, using a computer interface which modeled emotion on two dimensions (arousal and valence), and demographic information including measures of personality (TIPI) and emotional intelligence (TEIQue-SF). Functional analysis of variance on the time series data revealed a small number of statistically significant differences associated with Emotional Stability, Agreeableness, musical training and age. The results indicate that individual differences exert limited influence on continuous judgments of dynamic, naturalistic expressions. We suggest that this reflects a reliance on acoustic cues to emotion in moment-by-moment judgments of perceived emotions and is further evidence of the shared sensitivity to and processing of music and speech.

Item Type: Article
Depositing User: Symplectic Admin
Date Deposited: 10 Sep 2018 15:37
Last Modified: 01 Oct 2021 12:50
DOI: 10.3389/fnbeh.2018.00184
URI: https://livrepository.liverpool.ac.uk/id/eprint/3026051
