Monocular vision for uncooperative targets through AI-based methods and sensors fusion
BECHINI, MICHELE
2023/2024
Abstract
Autonomy in spacecraft relative navigation is an enabling technology for upcoming space missions that has gained increasing interest in recent years, owing to its broad range of application scenarios and the possibilities it opens for future mission concepts. The work presented in this Thesis focuses on enhancing vision-based algorithms for autonomous spacecraft relative navigation about uncooperative, known targets. The work starts by tackling the generation of photorealistic synthetic spaceborne images, proposing two different rendering tools and a novel high-fidelity sensor model validated against an actual sensor. Two pose estimation algorithms are then presented, both designed to maximize onboard deployability: one leverages two lightweight CNNs for object detection and line regression, while the other employs a computationally efficient multitask model that performs both object detection and keypoint regression. Both are tested within complete relative navigation pipelines, striking a balance between estimation accuracy and processing time. The possibility of relying on visible-infrared sensor fusion to overcome the limitations of visible-range sensors is then addressed through the development of two line-of-sight (LOS) estimation algorithms for far-range targets, covering both target-pointing and inertial-pointing scenarios and leveraging long-exposure images; fusing the two sensing modalities increases the reliability of the image processing pipelines. The Thesis concludes with a feasibility study on visible and thermal image fusion for relative pose estimation, showing promising results and potential solutions to open challenges in vision-based navigation for autonomous spacecraft.
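As a rough illustration of the keypoint-based approach summarized above, the sketch below shows how 2D keypoints regressed by a network, together with a known 3D model of the target, can feed a standard Perspective-n-Point (PnP) solver to recover the relative pose. This is a minimal example built on OpenCV's generic solver; the keypoint coordinates, intrinsics, and point ordering are illustrative assumptions, not the thesis implementation.

```python
import cv2
import numpy as np

# Hypothetical 3D keypoints of the known target model, in the target body
# frame (metres). In a keypoint-regression pipeline these correspond to the
# points the CNN is trained to locate in the image, in a fixed order.
model_points = np.array([
    [ 0.5,  0.5,  0.0],
    [-0.5,  0.5,  0.0],
    [-0.5, -0.5,  0.0],
    [ 0.5, -0.5,  0.0],
    [ 0.0,  0.0,  0.8],
    [ 0.0,  0.0, -0.8],
], dtype=np.float64)

# 2D keypoints regressed by the network for one image (pixels), same order.
image_points = np.array([
    [412.3, 280.1],
    [355.7, 278.9],
    [354.2, 341.6],
    [413.8, 343.0],
    [384.1, 265.4],
    [383.6, 357.8],
], dtype=np.float64)

# Pinhole camera intrinsics: focal length and principal point in pixels.
K = np.array([
    [800.0,   0.0, 384.0],
    [  0.0, 800.0, 312.0],
    [  0.0,   0.0,   1.0],
])

# Solve the PnP problem for the pose of the target relative to the camera.
# RANSAC rejects outlier correspondences, which helps when the regressed
# keypoints are noisy.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_points, image_points, K, distCoeffs=None,
    flags=cv2.SOLVEPNP_EPNP,
)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix, target frame -> camera frame
    print("relative attitude:\n", R)
    print("relative position (m):", tvec.ravel())
```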
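Similarly, for the far-range LOS estimation scenarios, once the image processing pipeline has extracted the target's centroid from a long-exposure image, the line-of-sight direction follows from back-projecting that pixel through the camera intrinsics. A minimal sketch, assuming an ideal pinhole camera; the intrinsics and centroid values are illustrative.

```python
import numpy as np

def los_unit_vector(u, v, K):
    """Back-project a pixel (u, v) to a line-of-sight unit vector in the
    camera frame, assuming an ideal pinhole camera with intrinsics K."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray direction, camera frame
    return ray / np.linalg.norm(ray)

# Example: centroid of the target extracted from a long-exposure image.
K = np.array([[800.0,   0.0, 384.0],
              [  0.0, 800.0, 312.0],
              [  0.0,   0.0,   1.0]])
los = los_unit_vector(401.2, 295.7, K)
print("LOS direction (camera frame):", los)
```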
File | Description | Size | Format
---|---|---|---
2024_02_Bechini_phdthesis.pdf | PhD Thesis | 83.31 MB | Adobe PDF

Available online to everyone from 08/02/2025.
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/216793