Shared Acoustic Codes Underlie Emotional Communication in Music and Speech - Evidence from Deep Transfer Learning (Datasets)



Coutinho, E (ORCID: 0000-0001-5234-1497) (2017) Shared Acoustic Codes Underlie Emotional Communication in Music and Speech - Evidence from Deep Transfer Learning (Datasets). Zenodo.

Text: Shared Acoustic Codes Underlie Emotional Communication in Music and Speech.docx - Published version (24kB)

Abstract

This repository contains the datasets used in the article "Shared Acoustic Codes Underlie Emotional Communication in Music and Speech - Evidence from Deep Transfer Learning" (Coutinho & Schuller, 2017). In that article, four datasets were used: SEMAINE, RECOLA, ME14 and MP (acronyms and datasets described below). The SEMAINE (speech) and ME14 (music) corpora were used for the unsupervised training of the Denoising Auto-encoders (domain adaptation stage); only the audio features extracted from the audio files in these corpora were used, and these features are provided in this repository. The RECOLA (speech) and MP (music) corpora were used for the supervised training phase, in which both the audio features extracted from the audio files and the Arousal and Valence annotations were used. For these two corpora, this repository provides the extracted audio features, as well as the Arousal and Valence annotations for those music datasets for which the author of this repository is the data curator.
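To illustrate how the two stages described above fit together, the sketch below shows a generic pipeline: a denoising auto-encoder is first pre-trained without labels on pooled acoustic features, and its encoder is then reused for supervised Arousal/Valence regression. This is a minimal illustration only, not the architecture or configuration published in the article; all tensor shapes, layer sizes and the random placeholder data (standing in for the feature files distributed in this repository) are assumptions.

```python
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    """Single-layer denoising auto-encoder over per-frame acoustic feature vectors."""

    def __init__(self, n_features: int, n_hidden: int = 128, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Corrupt the input with Gaussian noise and reconstruct the clean frame.
        noisy = x + self.noise_std * torch.randn_like(x)
        return self.decoder(self.encoder(noisy))


def pretrain_dae(dae: DenoisingAutoencoder, features: torch.Tensor,
                 epochs: int = 10, lr: float = 1e-3) -> None:
    """Unsupervised stage: reconstruct pooled speech + music features (no labels)."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dae(features), features)
        loss.backward()
        opt.step()


if __name__ == "__main__":
    # Placeholder data: frames x feature dimensions, standing in for features
    # extracted from the unsupervised (SEMAINE/ME14) and supervised (RECOLA/MP) corpora.
    pooled_features = torch.randn(1024, 65)
    arousal_valence = torch.randn(1024, 2)  # two continuous targets: Arousal, Valence

    dae = DenoisingAutoencoder(n_features=65)
    pretrain_dae(dae, pooled_features)

    # Supervised stage: reuse the pre-trained encoder and regress Arousal/Valence.
    regressor = nn.Sequential(dae.encoder, nn.Linear(128, 2))
    opt = torch.optim.Adam(regressor.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(regressor(pooled_features), arousal_valence)
        loss.backward()
        opt.step()
```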

Item Type: Other
Uncontrolled Keywords: music, emotion, arousal, valence, time-continuous, dataset
Depositing User: Symplectic Admin
Date Deposited: 29 Jan 2020 15:37
Last Modified: 19 Jan 2023 00:06
DOI: 10.5281/zenodo.600657
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3072453