The space industry is experiencing unprecedented growth, leveraging innovative space-based solutions to deliver new benefits. In particular, a new wave of deep-space missions focused on the exploration and exploitation of Solar System resources is emerging. In this context, small celestial bodies such as asteroids and comets are compelling targets because of their ancient origins, their potential for resource extraction, and their relevance to planetary defense. However, operating a spacecraft in proximity to these bodies poses complex challenges under the existing ground-based control paradigm. Motivated by the demand for rapid decision-making and cost-efficient solutions, the field is advancing rapidly toward autonomous spacecraft operations. Within this context, the introduction of artificial intelligence in guidance and control aims to enhance real-time decision-making and path optimization, enabling intelligent spacecraft to operate efficiently and autonomously. In particular, reinforcement learning emerges as an excellent tool for on-board planning thanks to its observation-action-observation feedback loop and its capacity to adapt to various uncertainties, including unknown or unmodeled dynamics, control execution errors, and measurement noise. This thesis develops a guidance and control policy for observing specific features on an asteroid's surface using reinforcement learning. A novel formulation of the problem as a Markov Decision Process is presented, and the Proximal Policy Optimization algorithm is employed to train a policy represented by a custom-designed neural network. An adaptive policy that responds to environmental changes is obtained by randomizing, during training, both the initial conditions and the parameters that define the task context. Several robustness tests are conducted to assess the policy's ability to adapt to environments different from the one used during training.
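For reference, Proximal Policy Optimization trains the policy network by maximizing the standard clipped surrogate objective (the thesis's specific hyperparameters are not given here; $\epsilon$ is the clipping parameter and $\hat{A}_t$ the estimated advantage):

```latex
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

The clipping keeps each policy update close to the previous policy, which is what makes PPO stable enough for training under the randomized environments described above.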
The policy proves effective in guiding the spacecraft to observe the features on the asteroid's surface, even in the presence of measurement and control execution errors, highlighting reinforcement learning as a powerful approach for autonomous guidance and control.
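The ingredients the abstract describes — a Markov Decision Process with an observation-action-observation loop, noisy measurements, control execution errors, and randomized initial conditions — can be sketched as a toy environment. This is an illustrative assumption-laden sketch, not the thesis code: the dynamics (2D double integrator around a point-mass asteroid), noise levels, reward, and all parameter values are simplified placeholders.

```python
import numpy as np

class ToyAsteroidEnv:
    """Toy MDP: 2D spacecraft near a point-mass asteroid (nondimensional units)."""

    def __init__(self, dt=0.1, mu=1.0, noise_std=0.01, seed=0):
        self.dt = dt                # integration time step (assumed)
        self.mu = mu                # gravitational parameter (assumed)
        self.noise_std = noise_std  # measurement noise, as the abstract mentions
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Domain randomization: sample a new initial state every episode so the
        # learned policy does not overfit to a single starting condition.
        r = self.rng.uniform(2.0, 4.0)            # random initial radius
        theta = self.rng.uniform(0.0, 2 * np.pi)  # random initial angle
        pos = r * np.array([np.cos(theta), np.sin(theta)])
        # Near-circular initial velocity with a random perturbation.
        tang = np.array([-np.sin(theta), np.cos(theta)])
        vel = np.sqrt(self.mu / r) * tang + self.rng.normal(0.0, 0.02, size=2)
        self.state = np.concatenate([pos, vel])
        return self._observe()

    def _observe(self):
        # The agent receives a noisy measurement, not the true state.
        return self.state + self.rng.normal(0.0, self.noise_std, size=4)

    def step(self, action):
        # Clip the commanded thrust, apply a control execution error, propagate.
        u = np.clip(np.asarray(action, dtype=float), -0.1, 0.1)
        u = u * (1.0 + self.rng.normal(0.0, 0.02, size=2))  # actuation error
        pos, vel = self.state[:2], self.state[2:]
        grav = -self.mu * pos / np.linalg.norm(pos) ** 3    # point-mass gravity
        vel = vel + (grav + u) * self.dt
        pos = pos + vel * self.dt
        self.state = np.concatenate([pos, vel])
        # Toy reward: hold a target observation radius of 3.0.
        reward = -abs(np.linalg.norm(pos) - 3.0)
        return self._observe(), reward

# One rollout with a placeholder random policy (a trained PPO network would go here).
env = ToyAsteroidEnv(seed=42)
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = env.rng.uniform(-0.1, 0.1, size=2)  # stand-in for policy(obs)
    obs, reward = env.step(action)
    total_reward += reward
```

In an actual training loop, the random policy would be replaced by the neural-network policy and the `(obs, action, reward)` tuples collected over many randomized episodes would feed the PPO update.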
Reinforcement Learning-based path planning for autonomous small body exploration
Sclausero, Michele
2023/2024
File | Description | Size | Format | Access
---|---|---|---|---
2024_12_Sclausero_Executive_Summary.pdf | Executive Summary | 577.32 kB | Adobe PDF | Openly accessible online
2024_12_Sclausero_Thesis.pdf | Thesis text | 3.96 MB | Adobe PDF | Openly accessible online
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/230396