Convolutional and attention-based networks for segmentation of lunar and small-bodies surfaces: from design to hardware deployment
Mondini, Umberto
2024/2025
Abstract
The success of modern space exploration missions critically depends on autonomous vision systems for surface navigation and hazard avoidance. This work presents a comprehensive evaluation of deep learning architectures for semantic segmentation of lunar and small-body surfaces, comparing traditional convolutional neural networks (CNNs) with modern attention-based approaches. The architectures are evaluated on synthetic datasets and on real mission imagery from the Chang'e 3, DART, OSIRIS-REx, and Hayabusa missions, providing insights into their generalization capabilities and practical applicability. The research demonstrates that hybrid architectures combining CNN and attention mechanisms achieve superior performance: MobileViTV2+UNet reaches the highest accuracy on lunar terrain segmentation, and SwiftFormer+UNet on small-body boulder segmentation. Notably, lightweight attention-based models such as SegFormerB0 maintain competitive performance while reducing the parameter count to only 3.71M. The models generalize robustly to real mission imagery, with attention-based architectures adapting best to challenging illumination conditions and varying surface characteristics. A key contribution lies in the systematic evaluation of deployment strategies on space-grade hardware platforms. The NVIDIA Jetson Orin Nano implementation, leveraging ONNX conversion and TensorRT optimization, keeps model accuracy within 1% of the original PyTorch implementations while enabling efficient inference across different power modes: 3.7-5.0 ms per inference for CNN-based and 7.8-9.4 ms for attention-based architectures in 15W mode, with times approximately doubling in 7W mode. Comparative analysis with the Google Coral Edge TPU reveals a different trade-off: inference times of 219.77 ms at much lower power consumption (2W).
These findings advance the understanding of practical deep learning deployment for space applications, demonstrating that modern architectures can achieve robust segmentation performance while meeting the computational and power constraints of space hardware.
https://hdl.handle.net/10589/236467