Integrating geometric and learning-based methods for camera autocalibration and multi-body structure-from-motion
Porfiri Dal Cin, Andrea
2024/2025
Abstract
Three-dimensional reconstruction from multiple images is a fundamental challenge in computer vision, essential for applications in robotics, photogrammetry, augmented reality, and virtual reality. Structure-from-Motion (SfM) addresses this challenge by jointly estimating camera poses and performing projective or metric 3D reconstruction from input images. However, SfM is a complex problem susceptible to failures at multiple stages, particularly in uncontrolled, real-world environments where prior knowledge of camera parameters is unavailable and moving objects complicate the reconstruction process.

A critical component for successful metric SfM is camera autocalibration—the process of determining intrinsic camera parameters without external calibration objects or known scene geometry. Traditionally, autocalibration is addressed separately from metric reconstruction, typically as a preprocessing step where rough intrinsic parameters are estimated to bootstrap the reconstruction, followed by refinement through bundle adjustment or other local techniques. However, these approaches are highly sensitive to initial inaccuracies in intrinsic parameter estimates; errors in autocalibration can propagate through the reconstruction, leading to artifacts or even complete failure. Moreover, traditional autocalibration methods treat moving objects in dynamic scenes as outliers, disregarding the valuable multi-view constraints their motion relative to the camera provides.

In this thesis, we introduce novel solvers that, for the first time, jointly solve camera autocalibration and metric 3D reconstruction in a single step. We develop a family of efficient solvers for pinhole cameras that directly recover intrinsic parameters from minimal image correspondences. Unlike conventional autocalibration and metric reconstruction methods, our approach bypasses the need to solve Kruppa's equations or perform projective-to-metric upgrades.
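For context, Kruppa's equations (the step these solvers sidestep) constrain the camera intrinsics through two-view geometry alone. In their standard textbook form, with fundamental matrix F between two views, second epipole e', and the dual image of the absolute conic ω* = K Kᵀ, they read:

```latex
F \,\omega^{*}\, F^{\top} \;\simeq\; [e']_{\times}\, \omega^{*}\, [e']_{\times}^{\top}
```

where [e']ₓ is the skew-symmetric matrix of e' and ≃ denotes equality up to an unknown scale. Recovering ω* (and hence K) requires solving these nonlinear equations before any metric reconstruction can begin, which is precisely the two-stage pipeline the joint formulation avoids.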
When integrated into the COLMAP SfM software, our method demonstrates impressive accuracy and robustness, leading to improved reconstruction quality and an increased number of correctly registered 3D points compared to existing solutions.

In multi-body scenes, where multiple objects move rigidly within the environment, traditional SfM methods fail to capture the moving objects, treating them as outliers. To address this limitation, we act on two fronts. First, we develop a multi-body autocalibration framework that revisits conventional Kruppa-based autocalibration and combines it with off-the-shelf motion segmentation. This allows us to utilize all available epipolar constraints—including those induced by the rigidly moving objects—to estimate well-constrained camera intrinsics and segmented motions. This integrated approach serves as an ideal preprocessing step for downstream Multi-Body Structure-from-Motion (MBSfM) tasks, extending traditional SfM methods to scenes containing independently moving rigid objects. Second, we develop a zero-shot learning-based MBSfM framework that produces per-pixel depth estimates of multi-body scenes consistent across the entire image, as well as camera poses relative to the static background. By employing monocular depth estimators with learned priors, this approach addresses the relative scale ambiguity inherent in traditional MBSfM methods.

Recognizing that our autocalibration methods do not address image distortion, we introduce a learning-based monocular approach for calibrating radially symmetric cameras and rectifying images with radial distortions. Supporting a wide range of cameras, including fisheye and 360-degree sensors, this method can be used as a preprocessing step before applying our autocalibration solvers.
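As a small illustration of the epipolar constraints that multi-body autocalibration exploits, the sketch below (NumPy, with made-up intrinsics and a made-up rigid motion, not values from the thesis) builds the essential matrix E = [t]ₓ R for one motion, maps it to a fundamental matrix F = K⁻ᵀ E K⁻¹, and checks that a pair of corresponding projections satisfies x'ᵀ F x = 0. In a multi-body scene, each rigidly moving object induces its own fundamental matrix of this form, but all of them share the same K, which is what makes the intrinsics well constrained.

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical pinhole intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# One rigid motion: a small rotation about the y-axis plus a translation.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])

E = skew(t) @ R                                 # essential matrix
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)   # fundamental matrix

# Project a 3D point into both views and evaluate the epipolar constraint.
X = np.array([0.5, -0.3, 4.0])
x1 = K @ X                                      # first view: identity pose
x1 /= x1[2]
x2 = K @ (R @ X + t)                            # second view: pose (R, t)
x2 /= x2[2]
print(abs(x2 @ F @ x1))                         # close to zero (floating point)
```

Repeating this with tracks segmented per moving object yields one epipolar constraint set per motion, all parameterized by the same K.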
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/236872