Adversarial and generative deep learning for data privacy in human-centered artificial intelligence
LOMURNO, EUGENIO
2024/2025
Abstract
Artificial Intelligence is growing rapidly in a highly interconnected world, providing solutions to problems that were unimaginable just a few years ago, while at the same time opening the door to existential risks and dangers for humanity. Keeping its development under control while respecting the individual is one of the main goals of Human-Centred Artificial Intelligence, a branch of computer science that has emerged in the last decade and aims to make the research, production and use of Artificial Intelligence algorithms transparent, credible, safe and ethical. With the advent of cyber-attacks against such algorithms, regulation and protection have become imperative. Through the use of certain Artificial Intelligence models, it is indeed possible to extract the information learned by third-party algorithms, showing that the training data is present in these architectures, albeit in the form of a latent representation. Data, whatever its nature or form, is thus one of the most important and debated resources: on the one hand it is the essential ingredient for learning algorithms, on the other an asset to be protected and kept private.

This thesis begins by examining the current landscape of privacy-preservation techniques in deep learning, revealing significant challenges in balancing model performance with data protection. Existing methods, including Differential Privacy, often force a substantial trade-off between privacy guarantees and model performance, limiting their practical application in real-world scenarios. In response to these challenges, this research introduces a series of novel contributions aimed at enhancing both privacy and performance in deep learning systems. It first explores regularisation techniques as a means of improving privacy protection whilst maintaining model performance; this approach proves to be a promising alternative to more computationally intensive methods, offering a better balance between privacy and utility. Building upon this foundation, the work presents Discriminative Adversarial Privacy (DAP), a new strategy that leverages adversarial training to optimise simultaneously for task performance and privacy protection. This approach demonstrates significant improvements over traditional methods, offering a more favourable balance between model accuracy and privacy guarantees.
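To make the adversarial mechanism concrete, the sketch below shows one plausible instantiation of this idea in PyTorch, not the exact DAP formulation from the thesis: a classifier learns its task while a discriminator, playing the role of a membership-inference attacker, tries to tell the classifier's outputs on training members from those on held-out non-members, and the classifier is rewarded for fooling it. The architectures, the weighting factor `lam` and the member/non-member batching are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy architectures; the thesis's actual models are not specified here.
classifier = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
discriminator = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
task_loss, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
lam = 0.5  # hypothetical weight trading task accuracy against privacy

def train_step(x_member, y_member, x_nonmember):
    logits_m = classifier(x_member)      # outputs on training data ("members")
    logits_n = classifier(x_nonmember)   # outputs on held-out data ("non-members")
    labels = torch.cat([torch.ones(len(x_member), 1),
                        torch.zeros(len(x_nonmember), 1)])

    # 1) Discriminator step: learn to distinguish member from non-member
    #    outputs, i.e. mount a membership-inference attack.
    d_loss = bce(discriminator(torch.cat([logits_m, logits_n]).detach()), labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Classifier step: solve the task while making its outputs
    #    indistinguishable to the discriminator (negated adversarial loss).
    c_loss = task_loss(logits_m, y_member) \
        - lam * bce(discriminator(torch.cat([logits_m, logits_n])), labels)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    return c_loss.item(), d_loss.item()
```

Driving the discriminator's accuracy towards chance removes the membership signal that such attacks exploit, which is the balance between accuracy and privacy referred to above.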
The thesis then investigates the potential of federated learning as a privacy-preserving technique for collaborative model development. Recognising the vulnerabilities inherent in traditional approaches, it proposes Synthetic Generative Data Exchange (SGDE), an innovative method that leverages generative models to produce synthetic data for exchange within a federated learning context, significantly enhancing privacy protection whilst maintaining or even improving model performance. Expanding on the concept of synthetic data, a comprehensive pipeline called Gap Filler (GaFi) is developed to optimise the quality and utility of synthetic datasets for downstream tasks; this approach significantly narrows the performance gap between models trained on synthetic data and those trained on real-world data across various domains. Additionally, the research explores the adaptation of Stable Diffusion 2.0 for synthetic dataset generation, incorporating techniques such as transfer learning and fine-tuning. Building upon these advancements, the Knowledge Recycling (KR) pipeline is introduced, integrating and refining the insights from GaFi and the Stable Diffusion experiments. KR employs advanced generative techniques to further enhance the effectiveness of synthetic data in model training, demonstrating its potential to surpass real data in certain scenarios. In the context of collaborative learning, this research proposes Federated Knowledge Recycling (FedKR), a novel approach that enables secure and effective collaboration across institutions without compromising data privacy. By leveraging locally generated synthetic data and sophisticated aggregation mechanisms, it offers enhanced security and improved model performance compared to traditional federated learning techniques.

In conclusion, this thesis presents a series of methodologies and techniques that contribute to the ongoing development of privacy-preserving deep learning. The proposed approaches offer potential solutions to some of the current challenges in balancing data utility and privacy in machine learning applications.
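As a concrete illustration of the exchange scheme shared by SGDE and FedKR, the following self-contained sketch simulates three clients that each fit a per-class generator on private data and share only samples drawn from it; a downstream model is then trained on the aggregated synthetic pool. Gaussian mixtures stand in for the deep generative models actually used in the thesis, and the client data, offsets and sample counts are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def private_client_data(offset):
    """Hypothetical private dataset of one client: two Gaussian classes."""
    x0 = rng.normal(offset, 1.0, size=(200, 2))
    x1 = rng.normal(offset + 3.0, 1.0, size=(200, 2))
    return np.vstack([x0, x1]), np.array([0] * 200 + [1] * 200)

# Each client fits a generator per class locally and exports only
# synthetic samples; the raw data never leaves the client.
pool_x, pool_y = [], []
for offset in (0.0, 0.5, -0.5):            # three simulated clients
    x, y = private_client_data(offset)
    for label in (0, 1):
        gen = GaussianMixture(n_components=2, random_state=0).fit(x[y == label])
        synth, _ = gen.sample(200)         # the only artefact that is exchanged
        pool_x.append(synth)
        pool_y.append(np.full(200, label))

# The aggregated synthetic pool trains a shared downstream model.
global_model = LogisticRegression().fit(np.vstack(pool_x), np.concatenate(pool_y))
print(global_model.score(*private_client_data(0.25)))  # sanity check on fresh data
```

Because only generator outputs cross institutional boundaries, no real record is ever exposed in transit, which is the core of the privacy argument behind exchanging synthetic data instead of gradients or raw samples.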
| File | Description | Size | Format |
|---|---|---|---|
| Adversarial_and_Generative_Deep_Learning_for_Data_Privacy_in_Human_Centered_Artificial_Intelligence.pdf | Doctoral thesis by Eugenio Lomurno; openly accessible on the internet | 8.34 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/238117