UNEM: Unrolled Generalized EM for Transductive Few-Shot Learning
Zhou, Long
2024/2025
Abstract
In recent years, the field of machine learning has witnessed significant advances driven by large-scale data and high-capacity models. However, the standard supervised learning paradigm, in which models are trained on massive annotated datasets tailored to each specific task, faces serious limitations in terms of scalability, adaptability, and data efficiency. These issues become particularly acute in domains where labeled data is scarce, costly to obtain, or rapidly evolving. In response, few-shot learning (FSL) has emerged as a promising research direction, aiming to equip models with the ability to learn new concepts from only a handful of labeled examples. Yet, despite notable progress, few-shot learning remains challenged by issues such as distribution shifts, class imbalance, and the rigidity of conventional inference procedures.

This thesis addresses these limitations by focusing on transductive few-shot learning, a paradigm in which access to the entire unlabeled test set is granted at inference time, allowing the model to adapt its predictions to the structure and distribution of the test data itself. Within this setting, we introduce UNEM (UNrolled Expectation-Maximization), a novel and flexible inference framework that unrolls a generalized EM algorithm into a differentiable neural architecture. Unlike existing methods, which require an extensive grid search to manually tune hyperparameters such as class-balance priors or entropy regularizers, UNEM learns these parameters directly from the data, making the inference process adaptive, efficient, and robust. At the core of UNEM is a modular and interpretable architecture that treats each iteration of the EM algorithm as a layer in a neural network. The framework supports multiple probabilistic models, including Gaussian mixtures for vision-only representations and Dirichlet distributions for vision-language models such as CLIP.
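The unrolled-EM idea described above can be sketched roughly as follows. This is a minimal, illustrative NumPy version for a Gaussian-mixture model with shared isotropic covariance; the function name `unem_inference`, the per-iteration `temperatures` array (standing in for the hyperparameters that UNEM would learn end-to-end), and the fixed iteration count are all hypothetical simplifications for exposition, not the thesis's actual differentiable implementation.

```python
import numpy as np

def unem_inference(support, support_labels, query, n_iters=5, temperatures=None):
    """Transductive few-shot inference via a fixed number of unrolled
    EM iterations (illustrative sketch only).

    support:        (n_s, d) labeled feature vectors
    support_labels: (n_s,)   class labels for the support set
    query:          (n_q, d) unlabeled feature vectors classified jointly
    temperatures:   one scaling factor per unrolled iteration; in UNEM
                    such per-layer parameters would be learned, here
                    they are fixed placeholders.
    """
    classes = np.unique(support_labels)
    # Initialize class centroids from the labeled support set.
    mu = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    if temperatures is None:
        temperatures = np.ones(n_iters)
    for t in range(n_iters):
        # E-step: soft assignments from negative squared distances.
        logits = -temperatures[t] * ((query[:, None, :] - mu[None]) ** 2).sum(-1)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        z = np.exp(logits)
        z /= z.sum(axis=1, keepdims=True)
        # M-step: re-estimate each centroid from the labeled support
        # points plus the softly assigned query points.
        for k in range(len(classes)):
            w = z[:, k]
            mask = support_labels == classes[k]
            mu[k] = (support[mask].sum(0) + (w[:, None] * query).sum(0)) / (
                mask.sum() + w.sum()
            )
    return classes[z.argmax(axis=1)]
```

Because the loop runs for a fixed number of iterations, each pass corresponds to one "layer" of the unrolled architecture; in the differentiable version, gradients would flow through all iterations so that the per-layer parameters can be trained.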
We validate UNEM on a diverse set of benchmarks covering fine-grained image classification, natural images, and multimodal datasets. Our experiments show that UNEM achieves state-of-the-art performance, outperforming both inductive and transductive baselines. In particular, we report accuracy gains of up to 10% on vision-only tasks and up to 7.5% on vision-language tasks, especially in challenging low-shot or class-imbalanced scenarios. The work "UNEM: UNrolled Generalized EM for Transductive Few-Shot Learning" has been published at CVPR 2025.
File | Description | Size | Format | Access
---|---|---|---|---
2025_07_Zhou_Tesi_01.pdf | Thesis | 5.71 MB | Adobe PDF | Freely accessible online
2025_07_Zhou_Executive Summary_02.pdf | Executive Summary | 925.12 kB | Adobe PDF | Freely accessible online
Documents in POLITesi are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/10589/240302