MambaTransfer: raw audio musical timbre transfer using selective state-space models
FRATTICIOLI, GUGLIELMO
2023/2024
Abstract
Timbre transfer is one of the emerging frontiers for neural models in audio processing systems. High-fidelity timbre synthesis conditioned on a melody is beyond the reach of traditional DSP algorithms, and new data-driven approaches based on diffusion models, autoencoders, and GANs have recently been proposed; they can generate high-quality audio, but at high computational cost and with slow training. This research introduces MambaTransfer, a new model based on Selective State Space Models (Mamba), designed to work directly on raw audio at 48 kHz, with the aim of increasing computational efficiency while maintaining high audio fidelity. The proposed approach segments the input audio waveform into subframes, which are then transposed to form an input matrix for the Mamba block. The model was trained on the Starnet dataset, which contains pairs of audio recordings of melodies aligned across different instruments. Performance was evaluated using the Fréchet Audio Distance and compared to previous models, showing that MambaTransfer outperforms Universal Network but falls slightly behind DiffTransfer. Despite this, MambaTransfer achieves significant computational advantages, requiring less than 1 GB of GPU memory and less than a day of training time, with inference running at 65× real time. This makes the MambaTransfer approach promising and motivates further studies aimed at improving its generation quality, in particular by resolving the artifacts at the edges of the segmented audio frames, which cause a slight tremolo effect in the generated sounds.
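To make the framing step concrete, below is a minimal sketch of how a raw waveform can be segmented into subframes and reshaped into the matrix a Mamba block consumes. This is an illustrative assumption, not the thesis code: the PyTorch setting, the function name `frame_for_mamba`, the non-overlapping frames, the 64-sample frame length, and the output orientation are all hypothetical choices.

```python
import torch

def frame_for_mamba(waveform: torch.Tensor, frame_len: int) -> torch.Tensor:
    """Illustrative sketch (not the thesis implementation): split a raw
    waveform of shape (batch, samples) into non-overlapping subframes, so
    each step of the Mamba sequence is a vector of frame_len samples.

    Returns a tensor of shape (batch, n_frames, frame_len): a sequence of
    n_frames "tokens", each with frame_len channels.
    """
    batch, n_samples = waveform.shape
    n_frames = n_samples // frame_len
    # Drop trailing samples that do not fill a whole subframe.
    x = waveform[:, : n_frames * frame_len]
    # (batch, samples) -> (batch, n_frames, frame_len): raw samples become
    # the channel dimension, shortening the sequence the SSM must scan
    # by a factor of frame_len.
    return x.reshape(batch, n_frames, frame_len)

# Hypothetical usage: one second of 48 kHz audio, 64-sample subframes.
x = torch.randn(1, 48000)
tokens = frame_for_mamba(x, 64)   # shape: (1, 750, 64)
```

The reshape matters because it reduces the sequence length the selective state-space scan must process by a factor of the frame length, which is plausibly where the reported efficiency gains come from; the exact frame size and orientation used in the thesis may differ.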
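For reference, the Fréchet Audio Distance used for evaluation is the Fréchet distance between two multivariate Gaussians fitted to embeddings of the reference and generated audio (VGGish embeddings in the original FAD formulation; the embedding model used in the thesis is not stated here). A standard sketch of the computation:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_real: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two embedding sets
    (rows = audio clips, columns = embedding dimensions)."""
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # numerical noise can yield a complex result
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

Lower values indicate that the generated audio's embedding statistics are closer to those of the reference set.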
| File | Description | Availability | Size | Format |
|---|---|---|---|---|
| 2025_04_Fratticioli_Tesi.pdf | Thesis | accessible on the internet to everyone | 7.25 MB | Adobe PDF |
| 2025_04_Fratticioli_Executive_Summary.pdf | Executive summary | accessible on the internet to everyone | 2.19 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/235141