Fine-grained category discovery under coarse-grained supervision is a novel and challenging task in machine learning: the goal is to automatically identify fine-grained semantic categories using only coarse-grained labels, without relying on explicit fine-grained annotations. To address this problem, this thesis proposes a comprehensive hierarchical framework that integrates Hierarchical Self-Contrastive Learning, Prototypical Contrastive Learning, and a Decoupled Prototypical Network. This multi-component architecture is designed to optimize fine-grained representations while remaining consistent with coarse-grained supervision. The main contribution is a hierarchical weighted self-contrastive network, built on RoBERTa embeddings and implemented in PyTorch. The model applies supervised learning in the shallow layers and weighted self-contrastive learning in the deeper layers, effectively aligning inter-class and intra-class distances. This hierarchical approach allows the model to progressively learn fine-grained knowledge from coarsely labeled data. The framework further adopts an Expectation-Maximization-based prototype optimization strategy and a momentum contrastive loss updated via an Exponential Moving Average (EMA), which stabilizes prototype learning, reduces prototype drift, and improves the semantic alignment of the discovered categories. Extensive experiments on public text classification datasets (Amazon Reviews, Stack Overflow, and Banking77) show that the proposed framework discovers well-structured fine-grained categories far more effectively than traditional clustering methods and baseline Transformer models such as BERT. With RoBERTa as the backbone, the FlipClass model reached an average accuracy of \textbf{75.2\%} across the three datasets, surpassing the second-best method μGCD (\textbf{70.5\%}) by \textbf{4.7\%}. It also achieved improvements of \textbf{6.3\%} on "old" classes and \textbf{4.8\%} on "new" classes.
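The EMA-based prototype update described above can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the thesis implementation: the function name, the hard per-cluster assignments, and the momentum value are chosen here for the example.

```python
import torch
import torch.nn.functional as F

def ema_update_prototypes(prototypes, embeddings, assignments, momentum=0.99):
    """EMA-style prototype update (illustrative): each prototype drifts
    slowly toward the mean of the embeddings currently assigned to it,
    which damps prototype drift between training steps.

    prototypes:  (K, D) tensor of current category prototypes
    embeddings:  (N, D) tensor of L2-normalized sample embeddings
    assignments: (N,)   tensor of cluster indices in [0, K)
    """
    new_protos = prototypes.clone()
    for k in range(prototypes.size(0)):
        mask = assignments == k
        if mask.any():
            batch_mean = embeddings[mask].mean(dim=0)
            new_protos[k] = momentum * prototypes[k] + (1 - momentum) * batch_mean
    # Re-normalize so prototypes stay on the unit hypersphere
    return F.normalize(new_protos, dim=1)
```

With a high momentum (e.g. 0.99), each prototype changes only slightly per batch, which is what keeps the contrastive targets stable.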
These outcomes confirm the effectiveness of the proposed method for open-world fine-grained category discovery. In addition, this thesis conducts an in-depth analysis of clustering behavior, error patterns, and training dynamics, revealing how the model architecture affects the learning process and category-discovery performance. By combining instance-level discrimination with prototype-based clustering, the proposed model achieves a balanced optimization between classification accuracy and fine-grained discovery. The incorporation of von Mises-Fisher clustering further stabilizes the assignment of embeddings to category prototypes, facilitating the gradual emergence of fine-grained structure in the learned feature space. Experimental evidence shows that, compared to BERT, RoBERTa embeddings significantly improve inter-class separability, reduce misclassifications, and yield more stable cluster structures during training. The findings of this study indicate that hierarchical self-contrastive learning holds great potential for reducing annotation costs and improving clustering quality, with broad applicability in fields such as e-commerce, biomedical informatics, and cybersecurity.
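The von Mises-Fisher assignment step admits a compact sketch: for unit-normalized vectors, a vMF mixture with a shared concentration parameter κ has a posterior that reduces to a softmax over scaled cosine similarities, since the normalizing constants cancel. The snippet below is an illustrative simplification; the function name and the shared-κ assumption are ours, not the thesis's.

```python
import torch
import torch.nn.functional as F

def vmf_soft_assign(embeddings, prototypes, kappa=10.0):
    """Soft assignment of embeddings to prototypes under a von Mises-Fisher
    mixture with shared concentration kappa. On the unit hypersphere this
    is exactly a softmax over kappa-scaled cosine similarities.
    """
    z = F.normalize(embeddings, dim=1)   # (N, D) unit-norm embeddings
    mu = F.normalize(prototypes, dim=1)  # (K, D) unit-norm prototypes
    logits = kappa * z @ mu.t()          # (N, K) scaled cosine similarities
    return logits.softmax(dim=1)         # each row is a distribution over K
```

Larger κ sharpens the assignments toward hard clustering; smaller κ keeps them soft, which is what smooths the early phase of prototype learning.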
Fine-grained category discovery under coarse-grained supervision is a novel and complex task in machine learning. The goal is to automatically identify detailed semantic categories using only coarse-grained labels, without relying on explicit fine-grained annotations. To tackle this challenge, this thesis proposes a comprehensive hierarchical framework that integrates hierarchical self-contrastive learning, prototypical contrastive learning, and a Decoupled Prototypical Network. This multi-component architecture is specifically designed to optimize fine-grained representations while maintaining consistency with coarse-grained supervision. The main contribution is a weighted hierarchical self-contrastive network, developed with RoBERTa embeddings and implemented in PyTorch. The model applies supervised learning in the shallow layers and weighted self-contrastive learning in the deeper layers, effectively aligning inter-class and intra-class distances. This hierarchical approach allows the model to progressively learn fine-grained knowledge from coarsely labeled data. The framework also adopts an Expectation-Maximization-based prototype optimization strategy and incorporates a momentum contrastive loss updated via an Exponential Moving Average (EMA), ensuring stable and reliable prototype learning, reducing prototype drift, and improving the semantic alignment of the discovered categories.
Extensive experiments on several public text classification datasets (Amazon Reviews, Stack Overflow, and Banking77) show that the proposed framework significantly outperforms traditional clustering methods and baseline Transformer models, including BERT, in discovering well-structured fine-grained categories. The model achieved superior results on all evaluation metrics: accuracy, Adjusted Rand Index (ARI), and Normalized Mutual Information (NMI). With RoBERTa as the backbone, the \textit{FlipClass} model reached an average accuracy of \textbf{75.2\%} across the three datasets, surpassing the second-best method, μGCD (\textbf{70.5\%}), by \textbf{4.7\%}. It also achieved improvements of \textbf{6.3\%} on "old" classes and \textbf{4.8\%} on "new" classes. These results confirm the effectiveness of the proposed method for fine-grained category discovery in open-world scenarios. In addition, this thesis presents a detailed analysis of clustering behavior, error patterns, and training dynamics, showing how the model architecture affects the learning process and category-discovery performance. By combining instance-level discrimination with prototype-based clustering, the proposed model achieves an optimal balance between classification accuracy and fine-grained discovery. The integration of von Mises-Fisher clustering further stabilizes the assignment of embeddings to category prototypes, facilitating the gradual emergence of fine-grained structure in the learned feature space. Experimental evidence shows that, compared to BERT, RoBERTa embeddings significantly improve inter-class separability, reduce misclassifications, and produce more stable cluster structures during training.
The findings of this study indicate that hierarchical self-contrastive learning has great potential for reducing annotation costs and improving clustering quality, with broad applications in fields such as e-commerce, biomedical informatics, and cybersecurity.
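The Expectation-Maximization-based prototype optimization mentioned above can be illustrated with a bare-bones alternating loop. This is a hypothetical simplification, not the framework's actual procedure: the E-step computes soft cosine-similarity responsibilities and the M-step recomputes prototypes as re-normalized responsibility-weighted means.

```python
import torch
import torch.nn.functional as F

def em_prototype_refinement(embeddings, prototypes, n_iters=10, kappa=10.0):
    """Illustrative EM-style refinement of K prototypes on the unit sphere.

    E-step: soft-assign each embedding to prototypes via a softmax over
            kappa-scaled cosine similarities.
    M-step: set each prototype to the responsibility-weighted mean of the
            embeddings, re-normalized to unit length.
    """
    z = F.normalize(embeddings, dim=1)   # (N, D)
    mu = F.normalize(prototypes, dim=1)  # (K, D)
    for _ in range(n_iters):
        resp = (kappa * z @ mu.t()).softmax(dim=1)  # (N, K) E-step
        mu = F.normalize(resp.t() @ z, dim=1)       # (K, D) M-step
    return mu
```

Alternating these two steps monotonically tightens the clusters; in the full framework the M-step would additionally be damped by the EMA momentum update.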
Hierarchical prototypical learning for generalized category discovery
Biasiolo, Lorenzo
2024/2025
File | Description | Access | Size | Format
---|---|---|---|---
2025_07_Biasiolo_Thesis.pdf | This is my Thesis | openly accessible online | 9.61 MB | Adobe PDF
2025_07_Biasiolo_Executive_Summary.pdf | This is the Executive Summary | openly accessible online | 496.37 kB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/239738