Unsupervised LiDAR pseudo-labeling via geometry-grounded dynamic scene decomposition
Ghilotti, Filippo
2023/2024
Abstract
The effectiveness of computer vision methodologies is substantially dependent on the availability of extensive annotated datasets, yet the annotation process remains a critical bottleneck. This challenge is particularly acute in the field of autonomous driving, where vast quantities of unlabeled data can be readily acquired from vehicle fleets, and 3D LiDAR point clouds provide valuable geometric information. To address this issue, a fully unsupervised, reproducible, and general method is introduced to generate pseudo-labels for three distinct downstream tasks: semantic segmentation, object detection, and depth estimation. The method operates at scale with minimal parameter tuning by leveraging accumulated LiDAR maps, raw images, and IMU measurements to automatically annotate LiDAR scans. A novel approach for static and dynamic object separation is devised, which exploits the temporal accumulation of points and labels alongside an Iterative Weighted Update Function to effectively distinguish between static backgrounds and dynamic, moving objects. The resulting semantic pseudo-labels and bounding boxes are validated by demonstrating their capability to replace manual annotations, achieving performance levels that approach Oracle benchmarks. Furthermore, using pseudo-labeled point clouds to train depth estimation networks yields an average improvement in Mean Absolute Error (MAE) of 42.41% within the 0-80 meter range, 51.5% within the 80-150 meter range, and 22.0% within the 150-250 meter range. Additionally, near-Oracle performance is attained in both object detection and semantic segmentation tasks. These findings underscore the significant potential of the proposed method in enhancing downstream computer vision applications through scalable and automated data annotation, thereby mitigating the reliance on labor-intensive manual annotation processes.
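The thesis files are listed below as not accessible, so the exact form of the Iterative Weighted Update Function is not reproduced here. As a minimal sketch of the static/dynamic separation idea the abstract describes, assuming an exponentially weighted per-voxel occupancy update over pose-aligned scans, the Python snippet below accumulates static evidence and thresholds it into static and dynamic points; the voxel size, weight `alpha`, and threshold `thresh` are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def voxel_keys(points, voxel_size=0.2):
    """Quantize 3D points of shape (N, 3) into integer voxel coordinates."""
    return np.floor(points / voxel_size).astype(np.int64)

def update_static_scores(scores, scan, alpha=0.7):
    """Exponentially weighted update of per-voxel static evidence (a sketch).

    scores: dict mapping voxel key -> static score in [0, 1].
    scan: (N, 3) points already aligned in a common world frame
          (e.g. via IMU/odometry poses, as the abstract suggests).
    Voxels occupied in this scan are pushed toward 1; previously seen
    voxels that are empty now decay toward 0. A real pipeline would
    decay only voxels inside the sensor's current field of view.
    """
    occupied = {tuple(k) for k in voxel_keys(scan)}
    for key in set(scores) | occupied:
        hit = 1.0 if key in occupied else 0.0
        scores[key] = alpha * scores.get(key, 0.0) + (1.0 - alpha) * hit
    return scores

def split_static_dynamic(scan, scores, voxel_size=0.2, thresh=0.6):
    """Split a scan into (static, dynamic) points by thresholding scores."""
    keys = voxel_keys(scan, voxel_size)
    mask = np.array([scores.get(tuple(k), 0.0) >= thresh for k in keys])
    return scan[mask], scan[~mask]

# Toy check: a fixed wall (static) and a point that moves every scan (dynamic).
rng = np.random.default_rng(0)
wall = rng.uniform([0.0, 0.0, 0.0], [5.0, 0.2, 2.0], size=(500, 3))
scores = {}
for t in range(10):
    mover = np.array([[1.0 + 0.5 * t, 3.0, 1.0]])  # displaces each scan
    scores = update_static_scores(scores, np.vstack([wall, mover]))
static_pts, dynamic_pts = split_static_dynamic(
    np.vstack([wall, np.array([[5.5, 3.0, 1.0]])]), scores)
print(len(static_pts), len(dynamic_pts))  # wall stays static, mover is dynamic
```

Under such a scheme, high-score points can be accumulated into a dense static background map, while low-score points are natural candidates for per-object clustering and bounding boxes, consistent with the pipeline the abstract outlines.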
| File | Description | Size | Format |
|---|---|---|---|
| 2024_Filippo_Ghilotti_Executive_Summary.pdf (not accessible) | Executive Summary | 4.03 MB | Adobe PDF |
| 2024_Filippo_Ghilotti_Thesis.pdf (not accessible) | Thesis | 63.79 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/230789