Modelling long- and short-term structure in symbolic music with attention and recurrence

de Berardinis, Jacopo, Barrett, Samuel, Cangelosi, Angelo and Coutinho, Eduardo ORCID: 0000-0001-5234-1497
(2020) Modelling long- and short-term structure in symbolic music with attention and recurrence. In: The 2020 Joint Conference on AI Music Creativity, 2020-10-19 - 2020-10-23, Stockholm, Sweden.

Full text: CSMC__MuMe_2020_paper_46.pdf - Published version (1MB)

Abstract

The automatic composition of music with long-term structure is a central problem in music generation. Neural network-based models perform relatively well at melody generation, but generating music with long-term structure remains a major challenge. This paper introduces a new approach to music modelling that combines recent advances in transformer models with recurrent networks: the Long-Short Term Universal Transformer (LSTUT). We compare its ability to predict music against current state-of-the-art music models. Our experiments are designed to push the boundaries of music models on considerably long music sequences, a crucial requirement for learning long-term structure effectively. Results show that the LSTUT outperforms all the other models and can potentially learn features related to music structure at different time scales. Overall, we show the importance of integrating both recurrence and attention in the architecture of music models, and their potential use in future automatic composition systems.
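
To illustrate the idea of integrating recurrence and attention in one music model, the following is a minimal PyTorch sketch, not the authors' implementation: the class name, layer sizes, token vocabulary, and the single transformer layer whose weights are shared across repeated applications (universal-transformer style) are all illustrative assumptions.

import torch
import torch.nn as nn

class LSTUTSketch(nn.Module):
    """Hypothetical LSTUT-style block: LSTM recurrence for short-term
    structure, weight-shared self-attention for long-term structure."""

    def __init__(self, vocab_size=128, d_model=256, n_heads=4, ut_steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Recurrence: an LSTM captures local, short-term dependencies.
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        # Attention: one encoder layer applied ut_steps times with shared
        # weights, in the spirit of the Universal Transformer.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.ut_steps = ut_steps
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)          # (batch, seq, d_model)
        x, _ = self.lstm(x)             # short-term context via recurrence
        for _ in range(self.ut_steps):  # repeated in depth, weights shared
            x = self.shared_layer(x)    # long-term context via attention
        return self.out(x)              # next-token logits per position

# Usage: predict a distribution over the next symbolic-music token.
model = LSTUTSketch()
tokens = torch.randint(0, 128, (2, 64))  # a toy batch of token sequences
logits = model(tokens)                   # shape (2, 64, 128)

The design point the sketch is meant to convey is the division of labour the abstract describes: recurrence handles fine-grained, local continuity, while the shared attention layers model structure over the whole (possibly very long) sequence.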

Item Type: Conference or Workshop Item (Unspecified)
Additional Information: Video presentation: https://youtu.be/Bj4RAaFqqLo.
Uncontrolled Keywords: Music modelling, Predictive models for music, Long-term music structure, Long-short term memory, Transformers
Date Deposited: 26 Oct 2020 10:05
Last Modified: 18 Jan 2023 23:26
URI: https://livrepository.liverpool.ac.uk/id/eprint/3105086