Impact of document quality on retrieval-augmented LLMs
Gallo, Elisa; Gottardi, Arianna
2024/2025
Abstract
Large language models (LLMs) have radically transformed the artificial-intelligence landscape, achieving unprecedented performance in understanding and generating natural language. Despite these results, their effectiveness remains closely tied to the quality of the underlying document sources, a critical issue that is still underexplored in the scientific literature, especially for Retrieval-Augmented Generation (RAG) systems. This thesis develops a Quality Model to measure document quality objectively and to quantify its impact on LLM performance in Question Answering systems. A controlled experimental approach was adopted: errors of different kinds were injected into the documents, and the responses generated by GPT-4o-mini and Gemini 2.5 Flash were evaluated with quantitative (NumReplicaErrors, NumAddition) and qualitative (Accuracy, Completeness, Key Concept Coverage, Conciseness) metrics, defined and implemented specifically for this purpose. The results reveal significant differences between the models, which show distinct adaptation strategies and levels of robustness when faced with degraded documents, and they support operational guidelines for the selection and deployment of LLMs. The research shows that robustness is not uniform across models, that the choice of LLM should take the expected reliability of the documents into account, and that preventive quality-assessment mechanisms are needed in critical domains.
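To make the protocol described above concrete, the sketch below shows one minimal way such a pipeline could look: degrade a document with known errors, query a model, and score the answer. Everything here is a hypothetical illustration, not the thesis's implementation; `inject_errors`, `ask_llm`, and the simplified versions of NumReplicaErrors and Key Concept Coverage are assumed placeholder names, and the real metrics were defined in the thesis itself.

```python
import random

def inject_errors(document: str, errors: dict[str, str], rate: float = 1.0) -> tuple[str, list[str]]:
    """Replace correct spans with erroneous ones; return the degraded
    document and the list of injected error strings (hypothetical scheme)."""
    degraded, injected = document, []
    for correct, wrong in errors.items():
        if correct in degraded and random.random() <= rate:
            degraded = degraded.replace(correct, wrong)
            injected.append(wrong)
    return degraded, injected

def num_replica_errors(answer: str, injected: list[str]) -> int:
    """Simplified NumReplicaErrors: count injected errors the model repeats verbatim."""
    return sum(1 for e in injected if e in answer)

def key_concept_coverage(answer: str, key_concepts: list[str]) -> float:
    """Simplified Key Concept Coverage: fraction of expected concepts mentioned."""
    if not key_concepts:
        return 0.0
    return sum(1 for c in key_concepts if c.lower() in answer.lower()) / len(key_concepts)

def ask_llm(question: str, context: str) -> str:
    """Stand-in for a call to GPT-4o-mini or Gemini 2.5 Flash."""
    raise NotImplementedError("replace with a real model call")

if __name__ == "__main__":
    doc = "The Treaty of Rome was signed in 1957 by six countries."
    degraded, injected = inject_errors(doc, {"1957": "1975"})
    # answer = ask_llm("When was the Treaty of Rome signed?", degraded)
    answer = "The Treaty of Rome was signed in 1975."  # example model output
    print("NumReplicaErrors:", num_replica_errors(answer, injected))        # 1
    print("KeyConceptCoverage:", key_concept_coverage(answer, ["Treaty of Rome", "1957"]))  # 0.5
```

In a setup like this, answers to the same question over clean and degraded variants of each document could be compared metric by metric, which is the kind of model-to-model robustness comparison the abstract reports.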
| File | Description | Access | Size | Format |
|---|---|---|---|---|
| 2025_12_Gottardi_Gallo_ExecutiveSummary_01.pdf | Executive Summary | Openly accessible online | 4.83 MB | Adobe PDF |
| 2025_12_Gottardi_Gallo_Tesi_01.pdf | Thesis text | Openly accessible online | 41.81 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/246597