Adversarial attacks against speech deepfake detectors
Wang, Wendy Edda
2023/2024
Abstract
Voice-based biometric authentication is increasingly used to secure access to a wide range of services and facilities. However, the rapid advancements in speech deepfake generation technologies, such as Text-to-Speech (TTS) and Voice Conversion (VC), pose significant risks to Automatic Speaker Verification (ASV) systems. These generative models can synthesize audio in the voice of a target speaker uttering arbitrary content. Such fake signals can be used to fool ASV systems and gain unauthorized access to the services they protect. To counter this threat, deep learning (DL)-based methods are being developed to detect fake speech samples. Although such models show promising results, they are susceptible to adversarial attacks. In the context of audio forensics, adversarial attacks introduce subtle perturbations into synthetic audio that prevent the detection model from classifying it correctly. In this thesis, we examine various aspects of adversarial perturbations in speech signals. We start by analyzing the differences between performing attacks in the time domain versus the frequency domain, addressing the challenges related to spectrogram inversion. Then, we investigate the transferability of white-box attacks across multiple models. Finally, we propose a novel ensemble-based approach that attacks multiple models simultaneously and discuss its transferability to unseen models.
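The abstract describes adding subtle perturbations to synthetic speech and an ensemble-based approach that attacks several detectors at once. As a rough illustration of the kind of procedure involved, the sketch below implements a generic PGD-style attack in the time domain against an ensemble of detectors; the detector interface, class convention (fake = 0, real = 1), and hyperparameters are placeholder assumptions and are not taken from the thesis.

```python
# Hedged sketch: PGD-style adversarial perturbation of a synthetic speech
# waveform against an ensemble of deepfake detectors. Model interfaces,
# labels, and hyperparameters are illustrative assumptions only.
import torch

def ensemble_pgd_attack(waveform, detectors, target_label=1,
                        eps=1e-3, alpha=2e-4, steps=50):
    """Perturb `waveform` so that every detector in `detectors` is pushed
    toward `target_label` (e.g. flipping a 'fake' decision to 'real'),
    keeping the perturbation within an L-infinity ball of radius `eps`.

    waveform : torch.Tensor of shape (1, num_samples), values in [-1, 1]
    detectors: list of torch.nn.Module, each assumed to map a waveform
               to logits over {fake = 0, real = 1}
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    target = torch.tensor([target_label])
    x_orig = waveform.detach()
    x_adv = x_orig.clone()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the loss over all detectors so a single perturbation
        # attacks every model in the ensemble simultaneously.
        loss = sum(loss_fn(m(x_adv), target) for m in detectors) / len(detectors)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Descend on the target-class loss, then project back into the
            # eps-ball around the original signal and the valid sample range.
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x_orig + torch.clamp(x_adv - x_orig, -eps, eps)
            x_adv = torch.clamp(x_adv, -1.0, 1.0)

    return x_adv.detach()
```

A frequency-domain variant of the same idea would perturb a spectrogram representation instead and then invert it back to a waveform (e.g. with Griffin-Lim), which is where the spectrogram-inversion challenges mentioned in the abstract come into play.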
File | Description | Availability | Size | Format
---|---|---|---|---
2024_12_Wang_Wendy_Edda_Executive_Summary.pdf | 2024 Wang Wendy Executive Summary | openly accessible online from 20/11/2025 | 558.77 kB | Adobe PDF
2024_12_Wang_Wendy_Edda_Thesis.pdf | 2024 Wang Wendy M.Sc. Thesis | openly accessible online from 20/11/2025 | 2.75 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/230960