Enhancing test generation for autonomous driving using multiple conditionings probabilistic diffusion models
ENRÍQUEZ BALLESTEROS, JAIME
2023/2024
Abstract
In recent years, Deep Learning (DL) models have played a critical role in the development of autonomous driving systems, performing essential tasks such as object detection, semantic segmentation, and decision-making. However, testing these models presents significant challenges, particularly in real-world scenarios where reproducing specific driving conditions is costly, unsafe, or impossible. Traditional testing methods lack the control needed to systematically evaluate DL models across a broad range of conditions, especially in rare or edge cases that are difficult to recreate, such as driving in extreme weather. This thesis addresses these challenges by proposing a novel approach that leverages conditional diffusion models for controlled image generation. The proposed method extends the T2I Adapter model to support multi-conditioning, allowing it to generate test scenarios based on various inputs, such as image edges, colors, or semantic information. This enables fine-grained control over the generation of realistic driving scenarios, significantly improving the ability to test DL models for autonomous driving under diverse conditions. The model is fine-tuned on the SHIFT dataset, a synthetic dataset collected in the CARLA simulation environment, which includes a wide variety of weather patterns, traffic conditions, and driving environments. The evaluation of the proposed solution demonstrates its effectiveness in generating valid, high-quality test cases. It also shows improved control over the generation process compared to existing state-of-the-art generative models, such as Stable Diffusion, ControlNet, and single-conditioning models. Additionally, this thesis contributes a newly augmented dataset built on top of SHIFT, along with various checkpoints of the trained model.
These contributions provide a comprehensive framework for advancing the testing of ML-based autonomous driving systems, enabling more rigorous and diverse evaluation processes.
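The multi-conditioning idea summarized in the abstract — combining signals such as image edges, colors, and semantic maps to steer generation — can be illustrated as a weighted fusion of per-condition adapter features. The sketch below is only a minimal illustration under stated assumptions, not the thesis's actual implementation: the fusion rule (a weighted sum, in the spirit of T2I-Adapter's multi-adapter composition), the feature shapes, and the weights are all hypothetical.

```python
import numpy as np

def fuse_adapter_features(condition_features, weights):
    """Fuse per-condition adapter feature maps into one conditioning
    signal via a weighted sum. This fusion rule is an assumption for
    illustration; the thesis may combine conditions differently."""
    assert len(condition_features) == len(weights)
    fused = np.zeros_like(condition_features[0])
    for feat, w in zip(condition_features, weights):
        fused += w * feat  # each condition contributes proportionally
    return fused

# Hypothetical adapter outputs for edge, color, and semantic inputs,
# all projected to the same feature shape (channels, height, width).
edge_feat = np.full((4, 8, 8), 1.0)
color_feat = np.full((4, 8, 8), 2.0)
seg_feat = np.full((4, 8, 8), 3.0)

# Weights let one condition dominate, giving coarse control over
# which input most strongly steers the generated scenario.
fused = fuse_adapter_features(
    [edge_feat, color_feat, seg_feat],
    weights=[0.5, 0.25, 0.25],
)
print(fused.shape, fused[0, 0, 0])  # (4, 8, 8) 1.75
```

In practice such fused features would be injected into the denoising U-Net of a diffusion model at matching resolutions; tuning the per-condition weights is one simple way to trade off, for example, structural fidelity (edges) against appearance control (colors).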
| File | Description | Access | Size | Format |
|---|---|---|---|---|
| 2024_10_Enriquez_Thesis.pdf | Thesis | Openly accessible online | 88.2 MB | Adobe PDF |
| 2024_10_Enriquez_executive_summary.pdf | Executive summary | Openly accessible online | 4.78 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/227412