AUTOCVSS: automatic methods for vulnerability prioritization
Arriciati, Giovanni
2024/2025
Abstract
Cybercrime is predicted to cost the world 9.5 trillion USD in 2024, according to Cybersecurity Ventures. If measured as a country, cybercrime would be the world's third-largest economy after the U.S. and China. To minimize its impact, organizations must perform effective risk analysis on the growing number of software vulnerabilities disclosed each day. Timely public disclosure and risk assessment are essential, and vulnerability information is typically shared through public databases. The National Vulnerability Database (NVD) has been analyzing and prioritizing disclosed vulnerabilities since the early 2000s. The number of vulnerabilities found every year has been steadily increasing, with a 20% increase in 2024, and the NVD is struggling to keep up with the analysis: a backlog of over twenty thousand vulnerabilities has yet to be evaluated. Existing solutions for vulnerability prioritization do not leverage the latest advancements in Natural Language Processing and Large Language Models (LLMs). In this thesis, we are the first to investigate the use of LLMs to predict a vulnerability's severity from its description. We provide a comprehensive overview of state-of-the-art techniques for vulnerability severity scoring, evaluate multiple prompt-engineering techniques across different LLMs, and compare our approaches against the state of the art. Llama 3 is the best-performing of the evaluated models (GPT-3, GPT-3.5, Mixtral 7B), achieving 91% accuracy and a mean absolute error of just 0.66, outperforming all results presented in the literature. We also evaluated the state-of-the-art model SecureBERT, achieving considerably better results than those reported by its authors, Aghaei et al. In addition, since every machine learning approach suffers from concept drift, we highlight the importance of addressing this drift and demonstrate strategies to minimize its impact.
Finally, we predicted vulnerability scores using the latest CVSS version (CVSS v4) in an exploratory study, showing the potential of LLMs in situations with little to no available data, where supervised learning is not feasible.

| File | Description | Size | Format |
|---|---|---|---|
| 2025_07_Arriciati_Executive_Summary_02.pdf (not accessible) | Executive Summary | 766.81 kB | Adobe PDF |
| 2025_07_Arriciati_Tesi_01.pdf (not accessible) | Thesis main text | 16.66 MB | Adobe PDF |
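As an illustration of the evaluation metrics the abstract reports (accuracy and mean absolute error over predicted CVSS base scores), the sketch below shows one plausible way such metrics are computed: accuracy as agreement on the CVSS v3 qualitative severity rating, and MAE on the numeric base scores. The function names and sample scores are hypothetical, not taken from the thesis:

```python
def severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

def evaluate(true_scores, predicted_scores):
    """Return (severity-class accuracy, mean absolute error on base scores)."""
    pairs = list(zip(true_scores, predicted_scores))
    mae = sum(abs(t - p) for t, p in pairs) / len(pairs)
    accuracy = sum(severity(t) == severity(p) for t, p in pairs) / len(pairs)
    return accuracy, mae

# Hypothetical ground-truth scores vs. model predictions
y_true = [9.8, 7.5, 5.3, 3.1]
y_pred = [9.1, 7.2, 6.1, 2.9]
acc, mae = evaluate(y_true, y_pred)
print(f"accuracy={acc:.2f}  MAE={mae:.2f}")
```

The severity thresholds follow the CVSS v3.1 qualitative rating scale (Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0); whether the thesis scores accuracy on classes or exact scores is not stated in this record.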
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240198