Autonomous lunar navigation through vision-based and radiometric measurements
Ceresoli, Michele
2025/2026
Abstract
In recent years, the Moon has been identified as a key testing ground to develop and enhance technologies for future deep-space exploration missions. As such, there has been a global resurgence in lunar missions, attracting participation from both space agencies and commercial actors. Although these missions target different objectives, they all require precise knowledge of the spacecraft trajectory, as well as a robust communication infrastructure to facilitate the transmission of large volumes of data back to Earth. However, as the number of potential users continues to grow, the existing ground segment infrastructure will likely be insufficient to meet the increased demand. As a result, a significant increase in the autonomy level of Guidance, Navigation and Control (GNC) systems for lunar users may be required in the near future. Current GNC architectures primarily rely on Vision-Based Navigation (VBN) techniques, which exploit optical cameras and Light Detection and Ranging (LiDAR) sensors to estimate the spacecraft state relative to the surrounding environment. While visual cameras have favourable size, weight and power characteristics, their performance is highly sensitive to changes in illumination conditions and surface properties. LiDAR systems, on the other hand, are limited to the terminal phases of landing trajectories or rendezvous operations. Recent failed lunar landing attempts further highlight that substantial challenges still remain in the development of reliable and fully autonomous GNC systems. For this reason, the European Space Agency (ESA) has launched the Moonlight initiative to foster the development of a dedicated Lunar Communication and Navigation System (LCNS) that offers a one-way navigation service using a small constellation of satellites in lunar orbit. Nevertheless, during the initial deployment of the constellation, the limited availability of navigation signals will pose a further challenge.
To overcome these limitations, this research work focuses on the development and validation of an autonomous navigation algorithm that integrates vision-based and radiometric measurements. The work begins with the establishment of a robust vision-based navigation pipeline, which serves as a baseline for subsequent performance comparisons. To this end, a synthetic image rendering engine is developed to generate realistic images of the lunar surface, enabling systematic testing and validation of the vision-based algorithms in a wide variety of scenarios. Building upon this framework, several crater detection networks are trained using both synthetic and real image datasets. The networks were then extensively tested and compared on common benchmarks to assess their generalization capabilities, and further validated on real images from the Chang'e missions to demonstrate their applicability in operational scenarios. The end-to-end performance of the entire crater-based navigation pipeline was then evaluated against a public dataset, showing significant benefits with respect to existing benchmarks. In parallel, relative VBN techniques were investigated to support operations in regions where craters cannot be detected or where catalogue information is unavailable. The algorithm performance was evaluated as a function of the most popular feature detectors, and the results highlighted that deep learning-based methods do not offer clear advantages in terms of either accuracy or computational time with respect to traditional computer vision algorithms. Finally, an integrated navigation architecture combining optical measurements with one-way ranging signals from a regional lunar navigation constellation was developed and evaluated. Extensive simulation campaigns were conducted for low lunar orbit and landing scenarios, comparing standalone vision-based, radiometric, and integrated solutions.
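The integrated architecture described above fuses optical observables with one-way ranging in a navigation filter. As a rough illustration of the underlying mechanics only, and not the implementation used in the thesis, the sketch below shows a minimal extended Kalman filter measurement update that corrects a position-and-clock-bias state with a single one-way pseudorange from a navigation satellite. The state layout, function name, and all numerical values are assumptions chosen for illustration.

```python
import numpy as np

def pseudorange_update(x, P, sat_pos, rho_meas, sigma_rho):
    """EKF update for a single one-way pseudorange (illustrative sketch).

    x        : state [px, py, pz, b], position [m] and receiver clock bias [m]
    P        : 4x4 state covariance
    sat_pos  : transmitter position [m]
    rho_meas : measured pseudorange [m]
    sigma_rho: 1-sigma measurement noise [m]
    """
    d = x[:3] - sat_pos
    rng = np.linalg.norm(d)
    h = rng + x[3]                    # predicted pseudorange
    H = np.hstack([d / rng, 1.0])     # Jacobian: line-of-sight unit vector + clock bias
    S = H @ P @ H + sigma_rho**2      # scalar innovation variance
    K = P @ H / S                     # Kalman gain (4-vector)
    x_new = x + K * (rho_meas - h)    # state correction along the line of sight
    P_new = (np.eye(4) - np.outer(K, H)) @ P
    return x_new, P_new
```

In an integrated filter of the kind summarized here, updates like this one would be interleaved with vision-based updates (e.g. crater or feature observations), so that each measurement type constrains the directions of the state space it observes best.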
The results demonstrate that the fusion of optical and radiometric observables significantly improves navigation performance and robustness with respect to standalone approaches, particularly under limited signal availability or poor illumination conditions.

| File | Size | Format |
|---|---|---|
| phdthesis_micheleceresoli.pdf (authorized users only from 16/03/2029) | 111.1 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/255057