EEG data driven automatic soundtrack composition based on emotion recognition
Marson, Cecilia
2020/2021
Abstract
Music and images are both key components in providing an emotional impact when we watch videos. The nature of the elements that make music an emotional stimulus is not easily identifiable, because emotions are highly subjective. In this thesis we propose a method to compose music using emotions as an input, and we analyse which elements can contribute to making a piece more emotional. Certain measurable signals in humans carry information about the mood at a given moment, such as brain signals. Our work therefore proposes a method that, starting from these signals, first infers the emotion felt using a Machine Learning algorithm. The retrieved emotion is then used to compose a piece of music that expresses that emotion, thanks to a Transformer algorithm. The system also uses musical features whose values vary together with the variation of the emotions. Brain signals are acquired with an EEG system from subjects stimulated by videos, and the music generated from the acquired signals is used as the soundtrack for the selected video. In this way the subject becomes, through his/her brainwaves, the composer of the soundtrack of the video he/she is watching. This study aims to add research elements to the study of emotions in the performance and generation of musical pieces. We chose to acquire the EEG data using Muse headbands, in collaboration with Fuse*Factory, who have furthered their use during the study for their artistic performances.
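To make the pipeline described above concrete, the sketch below shows one plausible shape for its first stage: band-power features extracted from windows of Muse EEG data feed an emotion classifier, and the predicted label would then condition the Transformer composer. Everything here is an illustrative assumption rather than the thesis implementation: the sampling rate, the band definitions, the emotion label set, the choice of a random-forest classifier, and the helper `extract_band_powers` are all hypothetical.

```python
# Minimal sketch of the EEG -> emotion stage, under assumed details.
# The Transformer composition stage is not shown; the emotion label would
# be passed to it as a conditioning input.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 256            # assumed Muse sampling rate (Hz)
CHANNELS = 4        # the Muse headband exposes 4 EEG electrodes (TP9, AF7, AF8, TP10)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # assumed bands
EMOTIONS = ["happy", "sad", "relaxed", "angry"]                 # hypothetical label set

def extract_band_powers(eeg: np.ndarray) -> np.ndarray:
    """Mean spectral power per band and channel, computed via an FFT periodogram."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)          # shape: (CHANNELS * len(BANDS),)

# Train the emotion classifier on labelled 2-second windows
# (random data stands in for real, annotated EEG recordings).
rng = np.random.default_rng(0)
X = np.stack([extract_band_powers(rng.standard_normal((CHANNELS, FS * 2)))
              for _ in range(200)])
y = rng.integers(0, len(EMOTIONS), size=200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At run time: classify a fresh window, then hand the label to the composer.
window = rng.standard_normal((CHANNELS, FS * 2))
emotion = EMOTIONS[clf.predict(extract_band_powers(window)[None, :])[0]]
print(f"Inferred emotion: {emotion} -> used to condition the music generator")
```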
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2022_04_Marson.pdf | Thesis | 5.51 MB | Adobe PDF | not accessible |
| 2022_04_Marson_ES.pdf | Executive summary | 742.63 kB | Adobe PDF | not accessible |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/188575