Deep reinforcement learning manipulation for a modular reconfigurable lunar robot: hyperparameter and robustness analysis
Marsigli Rossi Lombardi, Edoardo
2024/2025
Abstract
Manipulation tasks in space operations must satisfy strict safety, energy, and timing constraints under uncertainty, but classical control alone often struggles to meet these requirements robustly. This study investigates a deep reinforcement learning (DRL) approach in which a 7-DoF limb uses Proximal Policy Optimization (PPO) for manipulation under lunar-like conditions. The core contribution is a systematic, one-factor-at-a-time (OAT) hyperparameter analysis, covering clip range, learning rate, update cadence (epochs, mini-batches, steps per environment), and advantage/entropy settings, to isolate causal effects, tune for stable and safe learning dynamics, and establish a defensible baseline. A robustness analysis then re-evaluates and re-tunes the policy under realistic perturbations (domain randomization of target pose and distance, observation noise, and initial-pose shifts) to quantify generalization beyond nominal training. In simulation, the tuned policy achieves reliable approach-and-align behaviour within the trained perturbation envelope, with smooth updates (stable KL divergence and clip fractions), reduced constraint violations, and success rates that degrade proportionally and predictably only when pushed outside that envelope. Re-tuning under perturbations yields further stability gains and improved near-threshold performance, confirming that hyperparameter optima shift with environment variability. Finally, lunar-analogue field tests validated policy inference on hardware, demonstrating practical sim-to-real transferability of the learned controller and the effectiveness of the proposed workflow.
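As an illustrative sketch only (the thesis code is not published here), the snippet below shows how the workflow the abstract describes could be set up: a one-factor-at-a-time sweep over PPO's clip range with Stable-Baselines3, with a Gaussian observation-noise wrapper standing in for the perturbation types listed above. The library choice, the placeholder environment, the baseline values, and the sweep range are all assumptions, not values from the thesis.

```python
# Illustrative sketch, not the thesis code: OAT sweep over PPO's clip range
# with Stable-Baselines3. Library, environment, and all numbers are
# assumptions standing in for the (access-restricted) thesis setup.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class NoisyObs(gym.ObservationWrapper):
    """Gaussian observation noise, one perturbation type named in the abstract."""
    def __init__(self, env, noise_std=0.01):
        super().__init__(env)
        self.noise_std = noise_std

    def observation(self, obs):
        noise = np.random.normal(0.0, self.noise_std, size=obs.shape)
        return (obs + noise).astype(obs.dtype)

def make_env():
    # Placeholder with a continuous action space; the thesis uses a 7-DoF
    # manipulation environment that is not publicly available.
    return NoisyObs(gym.make("Pendulum-v1"), noise_std=0.01)

# Baseline kept fixed while exactly one factor varies (the OAT protocol).
baseline = dict(learning_rate=3e-4, n_steps=2048, batch_size=64,
                n_epochs=10, gae_lambda=0.95, ent_coef=0.0)

for clip_range in (0.1, 0.2, 0.3):  # the single varied factor
    model = PPO("MlpPolicy", make_env(), clip_range=clip_range,
                **baseline, verbose=0)
    model.learn(total_timesteps=50_000)
    # Stable-Baselines3 logs train/approx_kl and train/clip_fraction at each
    # update; stable values of both are the smoothness signals the abstract
    # refers to.
```

The same protocol would then be repeated for learning rate, update cadence, and advantage/entropy settings, and re-run under the full perturbation set to locate the shifted optima the abstract reports.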
| File | Size | Format | Access |
|---|---|---|---|
| 2025_12_MarsigliRossiLombardi_Tesi.pdf | 106.43 MB | Adobe PDF | Authorized users only |
| 2025_12_MarsigliRossiLombardi_Executive Summary.pdf | 7.49 MB | Adobe PDF | Open access |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/245797