LeakSealer: a unified framework for detecting toxic content and privacy leakage in LLM
Bonfanti, Stefano
2023/2024
Abstract
In recent years, Large Language Models (LLMs) have made remarkable advances in generating accurate and coherent text, yet their security remains underexplored, leaving a significant and alarming gap. Previous research has identified critical vulnerabilities, particularly concerning privacy. This thesis addresses this gap by designing and implementing an innovative framework to enhance the security of LLMs, tailored to real-world applications and industrial settings. We introduce LeakSealer, a unified and lightweight framework for detecting toxic content and privacy leakage in LLM conversations. Unlike most state-of-the-art methods, LeakSealer employs traditional machine learning techniques combined with human-in-the-loop learning, offering a novel and practical approach to securing LLM interactions. Furthermore, the approach is entirely black-box and model-agnostic, and it enables the identification of previously unknown attack patterns. LeakSealer is designed to operate in two settings: a static scenario, where it processes batches of historical conversation logs to detect potential security breaches and provides detailed reports to users, and a dynamic operational context, where it receives conversations in a streaming fashion and classifies them as malicious or safe. This dual capability supports both retrospective auditing of past interactions and real-time protection of live ones. Experimental results on two real-world datasets and one synthetically generated dataset demonstrate LeakSealer's effectiveness in detecting toxic content and Personally Identifiable Information (PII) leakage. Notably, it outperforms several state-of-the-art models, underscoring its potential as a robust and practical solution for enhancing LLM security.
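The abstract describes two operating modes but does not disclose LeakSealer's internals. Purely as a hypothetical illustration of such a dual-mode workflow, the sketch below (Python, scikit-learn) pairs batch grouping of historical logs for human review with a lightweight classifier for streaming traffic; the class name `DualModePipeline` and every technique shown (TF-IDF features, k-means grouping, logistic regression) are assumptions standing in for "traditional machine learning", not the thesis's actual method.

```python
# Hypothetical sketch only: none of the techniques below are confirmed by the
# abstract; they merely illustrate a black-box, dual-mode pipeline over raw
# conversation text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


class DualModePipeline:
    """Two settings as in the abstract: batch triage of historical logs
    (static) and per-conversation scoring of a live stream (dynamic)."""

    def __init__(self, n_groups: int = 5):
        self.vectorizer = TfidfVectorizer(max_features=5000)
        self.grouper = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
        self.classifier = LogisticRegression(max_iter=1000)

    def triage(self, conversations: list[str]) -> dict[int, list[str]]:
        # Static setting: group similar conversations so a human reviewer
        # can inspect and label each group (human-in-the-loop step).
        features = self.vectorizer.fit_transform(conversations)
        groups = self.grouper.fit_predict(features)
        report: dict[int, list[str]] = {}
        for conversation, group in zip(conversations, groups):
            report.setdefault(int(group), []).append(conversation)
        return report

    def fit(self, conversations: list[str], labels: list[int]) -> None:
        # Train on the human-provided labels (1 = malicious, 0 = safe).
        self.classifier.fit(self.vectorizer.transform(conversations), labels)

    def classify(self, conversation: str) -> int:
        # Dynamic setting: score one incoming conversation from the stream.
        return int(self.classifier.predict(
            self.vectorizer.transform([conversation]))[0])
```

Because the pipeline only consumes conversation text, it is independent of the underlying LLM, which is consistent with the black-box, model-agnostic framing in the abstract.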
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2024_12_Bonfanti_Executive_Summary.pdf | Executive Summary | 461.48 kB | Adobe PDF | not accessible |
| 2024_12_Bonfanti_Thesis.pdf | Thesis | 1.13 MB | Adobe PDF | not accessible |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/231164