Fake music detection with open-set recognition and vocal-instrumental separation
Barbera, Sophia
2024/2025
Abstract
The growing adoption of generative models for automatic music production has made it necessary to develop systems capable of distinguishing authentic songs from those generated by artificial intelligence. This work presents a fake music detection system able to classify real and synthetic songs produced by models such as Suno and Udio. The proposed approach is based on convolutional neural networks (ResNet50 and ConvNeXt-Tiny) trained on spectral representations (mel spectrograms) extracted from 5- and 120-second audio segments. The dataset used is SONICS, comprising over 97,000 tracks, including approximately 49,000 AI-generated songs. To handle generators not seen during training that may appear at inference time, an open-set recognition strategy based on confidence thresholds was introduced. Additionally, the effectiveness of vocal-instrumental separation, performed with the HTDemucs model, was evaluated to analyze the specific contribution of vocals and musical accompaniment to classification performance. The results show high performance in both closed-set and open-set scenarios, with excellent F1-scores and strong generalization. The analysis reveals that synthetic songs exhibit distinctive spectral and temporal patterns, particularly in the vocal components, as demonstrated by the higher performance obtained when classifying vocals alone compared to the instrumental parts. The proposed system provides a robust, efficient, and reproducible pipeline for detecting AI-generated musical content, with potential applications in music analysis, copyright protection, and multimedia forensics.
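To make the pipeline described above concrete, the sketch below illustrates the two core steps mentioned in the abstract: converting an audio segment into a log-mel spectrogram image for a binary CNN, and applying a confidence threshold for open-set rejection. This is a minimal illustrative sketch, not the code from the thesis: the sample rate, number of mel bands, threshold value, and the use of an untrained torchvision ResNet50 are assumptions made here purely for demonstration.

```python
# Minimal sketch of the detection pipeline described in the abstract.
# Assumptions (not taken from the thesis): 22.05 kHz sample rate, 128 mel
# bands, a 5-second segment, and a 0.9 confidence threshold for open-set
# rejection. The actual models, hyperparameters, and thresholds may differ.
import librosa
import numpy as np
import torch
import torchvision

SR = 22050          # assumed sample rate
SEGMENT_SEC = 5     # the thesis uses 5- and 120-second segments
N_MELS = 128        # assumed number of mel bands
THRESHOLD = 0.9     # assumed open-set confidence threshold

def mel_segment(path: str) -> torch.Tensor:
    """Load the first segment of a track and convert it to a log-mel image."""
    y, _ = librosa.load(path, sr=SR, duration=SEGMENT_SEC, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=N_MELS)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    # Replicate the single channel three times so it fits an ImageNet-style CNN.
    x = torch.from_numpy(mel_db).float().unsqueeze(0).repeat(3, 1, 1)
    return x.unsqueeze(0)  # shape: (1, 3, n_mels, time)

# Binary classifier: real vs. AI-generated. A ResNet50 backbone is used as in
# the thesis, but instantiated here without the trained weights.
model = torchvision.models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

def classify(path: str) -> str:
    """Closed-set prediction plus confidence-threshold open-set rejection."""
    with torch.no_grad():
        probs = torch.softmax(model(mel_segment(path)), dim=1).squeeze(0)
    conf, label = probs.max(dim=0)
    if conf.item() < THRESHOLD:
        return "unknown generator (rejected by open-set threshold)"
    return "real" if label.item() == 0 else "AI-generated"
```

The threshold-based rule mirrors the open-set strategy described in the abstract: when the classifier's highest softmax probability falls below the threshold, the track is flagged as potentially coming from an unknown generator rather than being forced into the real/fake decision.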
File | Description | Size | Format | Access
---|---|---|---|---
2025_07_Barbera_Executive Summary_02.pdf | Executive Summary text | 1.11 MB | Adobe PDF | Authorized users only, from 25/06/2028
2025_07_Barbera_Tesi_01.pdf | Thesis text | 5.99 MB | Adobe PDF | Authorized users only, from 25/06/2028
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240398