Integrated optical terrain relative navigation for autonomous lunar landing
Parracino, Giovanni Pio
2023/2024
Abstract
Spacecraft landing on celestial bodies is a challenging phase for the Guidance, Navigation, and Control system. Real-time autonomous navigation is required due to the rapid dynamics. Vision-based navigation using a monocular camera is a cost-effective, low-complexity strategy for this application. Two main navigation functions exist: Relative Navigation (RelNav) and Absolute Navigation (AbsNav). RelNav uses features with unknown coordinates to compute the pose relative to the previous frame. AbsNav, instead, determines the spacecraft pose with respect to a planetocentric reference frame using features with known locations, which requires some prior knowledge of the landing environment. Both methods have limitations, however. AbsNav experiences a reduction in known observable landmarks during descent, preventing state estimation during the crucial final phases (altitudes below 15-20 km), while RelNav accumulates error over time, leading to drift in the pose estimate. This thesis presents a novel navigation configuration that overcomes these limitations for pinpoint lunar landing. An Integrated Navigation scheme, based on the fusion of information from both Relative and Absolute Navigation, is adopted: AbsNav corrects the drift, while RelNav enhances the robustness and flexibility of the algorithm. The work focuses on the development of an image processing pipeline suited to this integrated approach. RelNav is performed through Visual Odometry, using an ORB detector with adaptive threshold and subwindowing to extract features, which are then tracked over multiple frames. AbsNav adopts craters, rilles, and wrinkle ridges as landmarks to achieve complete lunar surface coverage. YOLO is used to locate craters in the image; a novel method is developed to retrieve the crater rims from the detected bounding boxes, together with a robust matching strategy.
The performance and robustness of the algorithm under various illumination, camera pointing, and terrain conditions are evaluated through numerical simulations with calibrated synthetic lunar images. The benefits of the proposed image processing pipeline for the Integrated Navigation are discussed, together with the limits of the current implementation.

File | Description | Size | Format | Access
---|---|---|---|---
2024_07_Parracino_Tesi.pdf | Thesis text | 25.79 MB | Adobe PDF | available online to authorized users only
2024_07_Parracino_Executive_Summary.pdf | Executive Summary | 3.13 MB | Adobe PDF | available online to everyone
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/223914