Cooperative Learning and Its Application to Emotion Recognition From Speech



Zhang, Z., Coutinho, E. (ORCID: 0000-0001-5234-1497), Deng, J. and Schuller, B. (2015) Cooperative Learning and Its Application to Emotion Recognition From Speech. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23 (1), pp. 115-126.


Text: T-ASL-04553-2014.FINAL(single column).pdf
Available under licence: see the attached licence file.
Download (434kB)

Abstract

In this paper, we propose a novel method for the highly efficient exploitation of unlabeled data: Cooperative Learning. Our approach combines Active Learning and Semi-Supervised Learning techniques with the aim of reducing the cost of human annotation. The core idea of Cooperative Learning is to share the labeling work between human and machine efficiently, such that instances predicted with insufficient confidence are subject to human labeling, while those predicted with high confidence are machine-labeled. We conducted various test runs on two emotion recognition tasks with a variable number of initial supervised training instances and two different feature sets. The results show that Cooperative Learning consistently outperforms individual Active and Semi-Supervised Learning techniques in all test cases. In particular, we show that our method based on the combination of Active Learning and Co-Training matches the performance of a model trained on the whole training set while using 75% fewer labeled instances. Our method therefore efficiently and robustly reduces the need for human annotations.
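The confidence-based split between human and machine labeling can be illustrated with a short sketch. The Python code below is a minimal illustration only, assuming a scikit-learn-style classifier, a fixed confidence threshold, a fixed batch size per round, and an oracle callback (human_label) standing in for the human annotator; it is not the authors' implementation and omits the Co-Training variant examined in the paper.

    # Minimal sketch of a Cooperative Learning loop (illustrative assumptions,
    # not the authors' code): confident predictions are machine-labeled,
    # uncertain ones are sent to the human annotator.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def cooperative_learning(X_labeled, y_labeled, X_pool, human_label,
                             threshold=0.9, batch_size=100, rounds=10):
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            if len(X_pool) == 0:
                break
            clf.fit(X_labeled, y_labeled)

            # Take the next batch of unlabeled instances from the pool.
            X_batch, X_pool = X_pool[:batch_size], X_pool[batch_size:]
            proba = clf.predict_proba(X_batch)
            confident = proba.max(axis=1) >= threshold

            # Semi-supervised part: machine-label high-confidence instances.
            machine_y = clf.classes_[proba[confident].argmax(axis=1)]
            # Active-learning part: ask the human about low-confidence instances.
            human_y = np.array([human_label(x) for x in X_batch[~confident]])

            X_labeled = np.vstack([X_labeled, X_batch[confident], X_batch[~confident]])
            y_labeled = np.concatenate([y_labeled, machine_y, human_y])

        clf.fit(X_labeled, y_labeled)
        return clf

In such a sketch, the threshold governs how the labeling effort is shared: a higher threshold routes more instances to the human annotator, while a lower one lets the machine label more instances on its own at the risk of added label noise.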

Item Type: Article
Uncontrolled Keywords: Acoustics, active learning, cooperative learning, emotion recognition, multi-view learning, semi-supervised learning, supervised learning
Depositing User: Symplectic Admin
Date Deposited: 02 Mar 2018 09:47
Last Modified: 19 Jan 2023 06:58
DOI: 10.1109/TASLP.2014.2375558
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3008801
