Privacy-preserving deep learning: algorithms, technologies, and ethics
FALCETTA, ALESSANDRO
2024/2025
Abstract
In recent years, the field of Machine Learning (ML) and Deep Learning (DL) has witnessed exponential growth, thanks to significant advances in hardware and the availability of large datasets. ML and DL algorithms, which form a crucial subset of Artificial Intelligence (AI), have demonstrated remarkable success across a range of domains such as computer vision, natural language processing, and time-series analysis. This widespread adoption has led to the rise of the “as-a-service” model, where powerful ML and DL capabilities are offered to users without requiring technical expertise. However, this model introduces serious concerns regarding privacy, especially when sensitive user data is involved, such as in healthcare or personal information processing. The need to balance the utility of these algorithms with stringent privacy requirements is becoming increasingly pressing. A promising solution to this challenge lies in the application of Homomorphic Encryption (HE), a cryptographic technique that enables computations on encrypted data. This allows third parties to process data without ever accessing the underlying plaintext, preserving user privacy. However, the integration of HE with ML and DL introduces significant obstacles. Current HE schemes are limited in the types of operations they can support and the depth of the computation pipelines they can handle. Additionally, the overhead introduced by performing computations on encrypted data is substantial, both in terms of time and memory requirements, which necessitates the development of optimized and scalable solutions. Beyond these technical challenges, the use of privacy-preserving techniques such as HE raises important ethical questions. Privacy is a critical ethical value, but it is not the only one. The design of AI systems must also consider other values like transparency, fairness, and beneficence.
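The core property the abstract describes — a third party computing on ciphertexts without ever seeing the plaintext — can be illustrated with a toy additively homomorphic scheme (Paillier). This is a minimal sketch with tiny demo primes chosen for readability, not the lattice-based schemes (e.g., CKKS or BFV) that privacy-preserving ML systems typically rely on:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Illustrative only:
# the primes are far too small for any real security.
p, q = 293, 433                       # small demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                             # standard generator choice
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse for decryption

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# A party holding only ciphertexts can add the underlying plaintexts
# by multiplying the ciphertexts — without ever decrypting.
c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 42
```

Paillier supports only addition on ciphertexts; the limitation the abstract mentions — restricted operation types and bounded computation depth — is exactly why adapting full ML/DL pipelines to HE is nontrivial.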
The process of learning from encrypted data presents unique ethical dilemmas, where the prioritization of privacy may come into conflict with these other values. This thesis proposes a comprehensive methodology that addresses the algorithmic, technological, and ethical challenges of privacy-preserving ML and DL. It begins by exploring the algorithmic limitations posed by HE and proposing solutions for adapting or redesigning existing ML and DL architectures to operate within these constraints. The thesis then tackles the technological challenge, introducing cloud-based platforms designed to manage the heavy computational demands of HE-based algorithms. Finally, it extends the ethical analysis beyond privacy, highlighting the trade-offs necessary for the responsible design of privacy-preserving AI systems. The thesis is organized into four main parts. The introductory section provides the background on HE and the formulation of the research problem, setting the foundation for the subsequent sections. The second part addresses the algorithmic challenges, presenting new methodologies for building HE-compatible ML and DL models. The third part focuses on the technological aspects, proposing platforms that optimize the performance of these models in real-world environments. The fourth part delves into the ethical implications, discussing the intersection of privacy with other ethical considerations in AI system design. Finally, the thesis concludes with reflections on the current state of privacy-preserving ML and DL, and outlines directions for future research.
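One concrete instance of the architectural adaptation mentioned above — offered here as a generic illustration of the technique, not as the thesis's specific construction — is replacing non-polynomial activation functions. Since HE schemes evaluate only additions and multiplications, an activation such as ReLU must be swapped for a low-degree polynomial that approximates it on a bounded input range:

```python
# HE circuits support only additions and multiplications, so an
# HE-compatible network replaces non-polynomial activations with
# polynomial stand-ins, valid on a bounded input range.

def relu(x):
    return max(0.0, x)

def poly_relu(x):
    # Degree-2 least-squares fit of ReLU on [-1, 1]:
    # p(x) = (15/32) x^2 + (1/2) x + 3/32
    return 0.46875 * x * x + 0.5 * x + 0.09375

# The substitute stays close to ReLU across the fitted interval.
max_err = max(abs(relu(t / 100) - poly_relu(t / 100))
              for t in range(-100, 101))
assert max_err < 0.1
```

The trade-off is typical of HE-compatible design: a lower-degree polynomial keeps the multiplicative depth (and hence the ciphertext noise growth) small, at the cost of approximation error — and of accuracy loss when inputs leave the fitted range.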
| File | Size | Format |
|---|---|---|
| AlessandroFalcettaPhdThesis_V2.pdf (not accessible) | 7.36 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/233093