Using LLMs to enhance recommender systems by introducing transparency and steerability
Pagano, Luca; Vitello, Giuseppe
2025/2026
Abstract
Recommender Systems (RSs) are essential to digital platforms but remain largely opaque, limiting users’ ability to understand or influence the recommendations they receive. This work investigates how Large Language Models (LLMs) can enhance RSs not only in accuracy but also in transparency and steerability, allowing users to inspect and modify the representation of their preferences. We explore two complementary integration paradigms. LLM4Embedding incorporates pre-trained LLMs in the feature representation phase: user histories are summarized in natural language, converted into vector embeddings, and injected into collaborative filtering models (e.g., IALS). LLM4ReRank instead places the LLM at the end of a traditional recommendation pipeline: given candidate recommendations from a baseline model, the LLM generates a user summary and reranks the candidates accordingly, with the LLM fine-tuned via Low-Rank Adaptation and Group Relative Policy Optimization. Experiments were conducted on MovieLens 100k and Goodbooks-10k, enriched with external textual metadata. Evaluation covered both ranking accuracy (NDCG@10) and beyond-accuracy metrics (diversity, coverage, and novelty), against traditional baselines such as IALS, SLIM, ItemKNN, and UserKNN. Results show that LLM4Embedding, particularly in its blended variant combining user-summary embeddings and collaborative factors, maintains competitive accuracy while improving transparency and user controllability. LLM4ReRank achieves higher interpretability at the cost of greater computational demand and slightly lower accuracy. Overall, the study demonstrates that lightweight, modular LLM integrations can make recommender systems more interpretable and steerable without sacrificing practical efficiency.
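As a rough illustration of the blended LLM4Embedding variant mentioned in the abstract, the sketch below combines a placeholder embedding of a natural-language user summary with IALS-style collaborative user factors and scores items by dot product. The dimensions, the linear projection, and the blending weight `alpha` are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Minimal sketch of the "blended" LLM4Embedding idea.
# Assumptions (not from the thesis): dimensions, the projection step,
# and the blending weight `alpha` are illustrative choices.
rng = np.random.default_rng(0)

n_items, k = 1000, 64   # k: latent dimension of the collaborative (IALS) factors
d_text = 384            # dimension of the LLM / sentence embedding

# Collaborative factors learned by a baseline such as IALS (placeholders here).
item_factors = rng.normal(size=(n_items, k))
user_factor = rng.normal(size=k)

# Embedding of the natural-language user summary produced by the LLM
# (e.g. "Enjoys slow-burn sci-fi and classic noir ..."); placeholder vector here.
summary_embedding = rng.normal(size=d_text)

# Project the text embedding into the collaborative latent space.
# A learned linear map is one simple option; random here for illustration.
W = rng.normal(size=(k, d_text)) / np.sqrt(d_text)
summary_in_latent = W @ summary_embedding

# Blend the two user representations; alpha controls how strongly the
# editable text summary steers the final recommendations.
alpha = 0.5
blended_user = alpha * summary_in_latent + (1 - alpha) * user_factor

# Score and rank items exactly as a matrix-factorization recommender would.
scores = item_factors @ blended_user
top10 = np.argsort(-scores)[:10]
print(top10)
```

Because the user summary is plain text, editing it (and re-embedding) changes `blended_user` directly, which is one way the abstract's notion of steerability could surface in practice.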
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2025_12_Pagano_Vitello_Tesi_01.pdf | Thesis (article format) | 1.19 MB | Adobe PDF | Openly accessible online |
| 2025_12_Pagano_Vitello_Executive_Summary_02.pdf | Executive Summary | 512.51 kB | Adobe PDF | Openly accessible online |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/247411