Assessing LLM capabilities in code smell detection: an empirical study and a prototype tool
Bezzi, Alessio
2025/2026
Abstract
Code smells are symptoms of poor design and implementation choices in software that can undermine the maintainability, readability, and reusability of code. In recent years, detecting code smells has become a critical activity for ensuring the overall quality of software systems. Traditional Static Analysis Tools (SATs), while effective to some extent, often fail to capture deeper design flaws, which limits their usefulness in real-world scenarios. Recent advances in Large Language Models (LLMs) have opened new opportunities for applying natural language processing techniques to automated code analysis. This thesis investigates the use of LLMs for code smell detection, comparing their performance with that of SATs. Two public datasets (MLCQ and DACOS) were used to evaluate several models, compare different prompting strategies (zero-shot, few-shot, and chain-of-thought), and experiment with fine-tuning lightweight models. The results suggest that LLMs can complement, and in some cases replace, SATs in detecting code smells. Based on these results, a prototype detection tool was developed that enables automated code analysis and the generation of LLM-based reports. These reports summarize the detected smells, assess their severity, and provide an overview of problematic areas. A subjective evaluation, conducted through interviews, showed that the reports were generally clear and useful, with most participants expressing a willingness to adopt the tool. However, some limitations emerged, including inconsistencies in severity levels and a lack of context-specific recommendations. These findings indicate that LLMs can evolve into reliable tools for automated software analysis, paving the way for smarter, developer-oriented solutions.
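As an illustration of the zero-shot prompting strategy mentioned in the abstract, the sketch below asks a chat-style LLM whether a Java snippet exhibits a code smell. It is not taken from the thesis: the model name, prompt wording, smell list, and use of the OpenAI Python client are assumptions made only for the example.

```python
# Illustrative sketch only: zero-shot code smell detection via a chat-style LLM.
# Model, prompt wording, and smell taxonomy are assumptions, not the thesis setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SMELLS = ["Long Method", "God Class", "Feature Envy", "Data Class"]

def detect_smells_zero_shot(java_snippet: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model whether the snippet exhibits any of the listed smells."""
    prompt = (
        "You are a code reviewer. Decide whether the following Java code "
        f"exhibits any of these code smells: {', '.join(SMELLS)}. "
        "Answer with the smell name(s) or 'none', plus a one-line justification.\n\n"
        f"```java\n{java_snippet}\n```"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output eases comparison across runs
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "public class Order { /* ...hundreds of lines of mixed logic... */ }"
    print(detect_smells_zero_shot(snippet))
```

A few-shot variant would prepend labeled example snippets to the prompt, and a chain-of-thought variant would ask the model to reason step by step before giving its verdict.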
| File | Size | Format |
|---|---|---|
| Tesi_Magistrale___Code_Smells.pdf (not accessible) | 2.45 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise stated.
https://hdl.handle.net/10589/246219