Reinforcement learning-based guidance algorithm for close proximity operations
MANCHAY RAMIREZ, GIULIA STEFANY
2024/2025
Abstract
The purpose of this thesis is to investigate the enhancement of classical guidance algorithms for close-proximity operations through the use of reinforcement learning. Close-proximity maneuvers, such as rendezvous, docking, and in-orbit servicing, require high levels of precision and reliability, and their importance is steadily increasing with the growing demand for on-orbit servicing missions. Meeting these stringent operational requirements calls for guidance strategies that are both accurate and computationally efficient.

Classical guidance approaches for these operations are well established, but they present certain limitations, particularly in the selection and tuning of control parameters. In this work, a Proximal Policy Optimization (PPO) algorithm is employed to enhance an Artificial Potential Field guidance law combined with a Sliding Mode Controller. The objective is to adaptively tune the sliding-mode gain in order to improve performance, with specific emphasis on reducing propellant consumption.

The results show that the reinforcement-learning-enhanced controller achieves a significant reduction in propellant usage while preserving comparable trajectory duration and docking accuracy. A statistical analysis under uncertain initial conditions further confirms the robustness of the trained policy within the tested range. Overall, this work demonstrates the feasibility of integrating reinforcement learning as an adaptive tuning mechanism for classical guidance laws, combining the reliability of analytical control structures with the flexibility of data-driven optimization.
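The abstract describes a control structure in which a PPO policy adaptively tunes the gain of a sliding-mode controller layered on artificial-potential-field guidance. The sketch below is a rough, self-contained illustration of that structure, not the thesis implementation: it assumes a planar double-integrator chaser, a purely attractive potential field, a boundary-layer (tanh) sliding surface, and a hand-made state-dependent gain standing in for the trained PPO policy. All function names, dynamics, and parameter values are illustrative assumptions.

```python
import numpy as np

def apf_reference_velocity(r, r_target, k_att=0.5):
    """Attractive artificial-potential-field term: the commanded velocity
    points down the gradient of U = 0.5 * k_att * ||r - r_target||^2."""
    return -k_att * (r - r_target)

def smc_acceleration(v, v_ref, k_smc, phi=0.2):
    """Sliding-mode-style control on the velocity error s = v - v_ref.
    tanh(s/phi) is a saturated sign function (boundary layer phi) that
    limits chattering; k_smc is the gain an RL policy would tune."""
    s = v - v_ref
    return -k_smc * np.tanh(s / phi)

def simulate(gain_policy, r0, r_target, dt=0.1, steps=200):
    """Double-integrator chaser under APF guidance + SMC tracking.
    gain_policy maps the state to a sliding-mode gain (the learned part).
    Returns the final position and a crude propellant proxy sum(|a|*dt)."""
    r, v = np.array(r0, float), np.zeros(2)
    dv_total = 0.0
    for _ in range(steps):
        v_ref = apf_reference_velocity(r, r_target)
        a = smc_acceleration(v, v_ref, gain_policy(r, v))
        v += a * dt
        r += v * dt
        dv_total += np.linalg.norm(a) * dt
    return r, dv_total

# A fixed-gain baseline versus a hand-made "adaptive" gain that relaxes
# near the target -- a stand-in for what a trained PPO policy outputs.
fixed_gain = lambda r, v: 2.0
adaptive_gain = lambda r, v: 0.5 + 1.5 * min(1.0, float(np.linalg.norm(r)))

r_end, dv = simulate(adaptive_gain, r0=[5.0, -3.0], r_target=np.zeros(2))
```

In the thesis, the stand-in `adaptive_gain` would be replaced by the PPO actor network, and the reward would penalize the propellant proxy while enforcing docking accuracy and trajectory-duration constraints.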
File: 2026_03_ManchayRamirez.pdf (Adobe PDF, 3.54 MB, openly accessible online)
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/251477