Understanding the limitations of current 6D pose estimation methods
Iacoella, Angelo
2024/2025
Abstract
Estimating the rigid transformation of objects relative to a camera, commonly known as 6D pose estimation, is a critical yet challenging task in computer vision. In this work we address the limitations of current methods in the context of unseen-object 6D pose estimation, where models are required to estimate the pose of objects that they have not seen during training. We conduct a detailed qualitative and quantitative analysis of state-of-the-art methods and identify key failure points related to current zero-shot segmentors and render-and-compare pose estimators. To overcome these challenges, we introduce a novel Collision-Aware pipeline that integrates spatial information and enforces scene-level consistency to improve pose estimation accuracy. Our method leverages volumetric collision measurements and alignment scores to quantify and maintain spatial coherence among detected instances. By reconstructing a coherent scene configuration that adheres to both geometric relationships and physical constraints, our approach effectively reduces ambiguities in pose estimation, particularly for heavily occluded objects. To our knowledge, we propose the first multi-object, multi-instance 6D pose estimation pipeline that exploits information from the reconstructed scene. Experimental results show that our approach outperforms current state-of-the-art methods, laying the groundwork for future research aimed at developing more robust and reliable 6D pose estimation systems.
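As an illustration of the kind of scene-level scoring the abstract describes, the following minimal Python sketch combines per-instance alignment scores with a Monte Carlo estimate of pairwise intersection volume between posed meshes. It is not the thesis's actual implementation: the function names, the `collision_weight` parameter, the sampling budget, and the use of the `trimesh` library are assumptions made for illustration only.

```python
# Hypothetical sketch: score a set of candidate object poses by rewarding
# per-instance alignment and penalising volumetric collisions between instances.
# All names, weights and sample counts are illustrative, not the thesis values.
import itertools
import numpy as np
import trimesh


def posed_mesh(mesh, pose_4x4):
    """Return a copy of `mesh` transformed by a 4x4 rigid pose (camera frame)."""
    m = mesh.copy()
    m.apply_transform(pose_4x4)
    return m


def intersection_volume(mesh_a, mesh_b, n_samples=5000):
    """Monte Carlo estimate of the overlap volume between two watertight meshes:
    sample points inside A and measure the fraction that also falls inside B."""
    pts = trimesh.sample.volume_mesh(mesh_a, n_samples)
    if len(pts) == 0:
        return 0.0
    inside_b = mesh_b.contains(pts)
    return float(mesh_a.volume) * float(inside_b.mean())


def scene_score(meshes, poses, alignment_scores, collision_weight=1.0):
    """Scene-level objective: reward render-and-compare alignment scores and
    penalise pairwise interpenetration between the posed instances."""
    posed = [posed_mesh(m, T) for m, T in zip(meshes, poses)]
    collision = sum(
        intersection_volume(posed[i], posed[j])
        for i, j in itertools.combinations(range(len(posed)), 2)
    )
    return sum(alignment_scores) - collision_weight * collision
```

In such a scheme, candidate pose sets produced by a render-and-compare estimator could be re-ranked by `scene_score`, so that hypotheses causing large interpenetration between detected instances are discarded in favour of physically consistent scene configurations.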
File | Description | Size | Format | Access
---|---|---|---|---
Tesi_Completo.pdf | Thesis | 11.92 MB | Adobe PDF | openly accessible online
Executive_Summary_Completo.pdf | Executive Summary | 1.13 MB | Adobe PDF | openly accessible online
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/235046