Visual foundation models for few-shot segmentation
Pertino, Paolo
2023/2024
Abstract
In recent years, the rapid evolution of large-scale self-supervised learning has reshaped image understanding, enabling computer vision models to adapt to novel tasks with minimal supervision. Few-Shot Segmentation (FSS) benefits directly from these advances, as it addresses the challenge of reliably segmenting new object classes given only a few annotated examples. This work investigates how Foundation Models, such as the Segment Anything Model, CLIP, and DINOv2-pretrained Vision Transformers, can be leveraged to tackle the FSS problem. Initially, a purely vision-based pipeline is analyzed, demonstrating that even robust backbone networks can struggle to extract sufficiently rich features in challenging scenarios, such as occlusions, partially visible (cut-off) objects, or significant intra-class variability. To mitigate these issues, an additional text-based system is introduced, which processes segmentation proposals using class names and textual descriptions for improved semantic understanding. Building on both methods, a new pipeline, called MARS (Multimodal Alignment and Ranking System), is proposed. MARS addresses the aforementioned challenges by integrating four complementary scoring metrics: local and global visual measures derived from feature correspondences, and local and global conceptual cues derived from textual prompts. Concretely, MARS retrieves the name and definition of the target class via ViP-LLaVA and WordNet, refines local conceptual information with CLIP, evaluates broad semantic coherence through an AlphaCLIP module, and incorporates a DINOv2-based branch for both global and fine-grained visual matching. These components are consolidated into a unified ranking and merging framework for mask proposals. Experiments on benchmark datasets such as COCO-20i, Pascal-5i, LVIS-92i, and FSS-1000 show that MARS effectively mitigates visual discrepancies and enhances segmentation performance.
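As a rough illustration of the ranking and merging step described above, the sketch below fuses four per-proposal scores (local visual, global visual, local conceptual, global conceptual) into a single ranking and merges the top-ranked masks. The `rank_and_merge` helper, the equal weights, the `top_k` cut-off, and the union-based merge are illustrative assumptions made for this example, not the exact fusion rule used by MARS.

```python
import numpy as np


def rank_and_merge(proposal_masks, scores, weights=(0.25, 0.25, 0.25, 0.25), top_k=3):
    """Rank binary mask proposals by a weighted sum of four normalized scores
    (local visual, global visual, local conceptual, global conceptual) and
    merge the top-ranked masks by pixel-wise union.

    proposal_masks: array of shape (N, H, W) with {0, 1} entries.
    scores: array of shape (N, 4), one row of four scores per proposal.
    NOTE: the weights, top_k value, and union-based merging are assumptions
    for this sketch, not the fusion rule actually used in the thesis.
    """
    proposal_masks = np.asarray(proposal_masks)
    scores = np.asarray(scores, dtype=float)

    # Fuse the four complementary metrics into a single ranking score per proposal.
    fused = scores @ np.asarray(weights, dtype=float)

    # Keep the best-scoring proposals and merge them into one prediction mask.
    order = np.argsort(-fused)[:top_k]
    merged = proposal_masks[order].any(axis=0).astype(np.uint8)
    return merged, order, fused


# Toy usage: 5 random proposals on a 4x4 grid with random scores.
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(5, 4, 4))
s = rng.random((5, 4))
pred, kept, fused = rank_and_merge(masks, s, top_k=2)
print(kept, fused.round(2))
```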
| File | Description | Size | Format |
|---|---|---|---|
| Pertino_04_2025_Visual_Foundation_Models_For_FSS.pdf | Thesis Pertino Paolo - Visual Foundation Models for Few-Shot Segmentation - Main Document (openly accessible online) | 50.64 MB | Adobe PDF |
| Pertino_04_2025_Visual_Foundation_Models_For_FSS_Executive_Summary.pdf | Thesis Pertino Paolo - Visual Foundation Models for Few-Shot Segmentation - Executive Summary (openly accessible online) | 3.16 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/235123