Exploring the potential of reinforcement learning for controlling a wheel-less snake robot
BIFFI, GIACOMO
2021/2022
Abstract
Wheel-less snake robots have proved to be a promising solution for search and rescue operations and industrial inspections. Their compact design and high flexibility enable them to operate in narrow and cluttered environments, making them more effective than human intervention or other kinds of robots. However, since the environment in which a snake robot will operate cannot be known in advance, designing a control strategy that adapts to different contexts and situations is a crucial challenge. Addressing it requires working on middle-level control, where abstract high-level commands are translated into concrete motor commands. Notwithstanding recent research advances, current control strategies lack active adaptation of the locomotion to unforeseen obstacles. To fill this gap, this study leverages the generative properties of Artificial Intelligence (AI) to develop a middle-level control strategy that generates an optimal motion of the robot by proactively anticipating and accounting for obstacle traversal. A simplified approach is adopted in this thesis to account for the complexity of the task and the lack of prior research. Through an experimental methodology, reinforcement learning is employed to train a neural network that controls the snake's joints using proprioceptive information as input. First, Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) are proposed as algorithms to train the network to learn locomotion from scratch; then, PPO is also tested to generate locomotion while providing a reference motion. The three approaches are discussed, and PPO with a reference motion emerges as the most promising. Despite numerous training runs, the forward motion remains limited, and no satisfactory locomotion has been achieved. Nevertheless, the research represents a first attempt to develop a proactive control strategy for snake robots and identifies further avenues of research in the field.
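To make the described approach concrete, the sketch below shows how a PPO agent could be trained on proprioceptive observations (joint angles and velocities) with a reward combining a forward-progress term and tracking of a serpenoid reference motion, in the spirit of the strategy summarized above. It is a minimal illustration, not the thesis code: the toy environment, its placeholder dynamics, the serpenoid parameters, and the reward weights are assumptions, and Gymnasium with stable_baselines3 is used only as a convenient stand-in for the actual simulation setup.

```python
# Minimal sketch (not the thesis implementation): PPO with a serpenoid reference motion.
# Assumptions: N joints, proprioceptive observations only, and a crude surrogate for
# forward displacement; a real setup would use a physics simulator for the dynamics.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

N_JOINTS = 8
DT = 0.05  # control time step [s]

def serpenoid_reference(t, n=N_JOINTS, amp=0.6, omega=2.0, phase=0.8):
    """Serpenoid gait used as reference motion (hypothetical parameters)."""
    i = np.arange(n)
    return amp * np.sin(omega * t + i * phase)

class SnakeEnvSketch(gym.Env):
    """Toy snake-robot environment with proprioceptive observations."""
    def __init__(self):
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2 * N_JOINTS,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(N_JOINTS,), dtype=np.float32)
        self.q = np.zeros(N_JOINTS)   # joint angles
        self.qd = np.zeros(N_JOINTS)  # joint velocities
        self.t = 0.0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.q[:] = 0.0
        self.qd[:] = 0.0
        self.t = 0.0
        return self._obs(), {}

    def step(self, action):
        # Placeholder dynamics: actions are interpreted as joint velocity commands.
        self.qd = np.clip(action, -1.0, 1.0)
        self.q = self.q + DT * self.qd
        self.t += DT
        # Reward = forward-progress surrogate + reference-motion tracking (imitation) term.
        q_ref = serpenoid_reference(self.t)
        forward = float(np.mean(self.qd * np.cos(self.q)))   # stand-in for body displacement
        tracking = -float(np.mean((self.q - q_ref) ** 2))     # penalize deviation from reference
        reward = forward + 0.5 * tracking
        truncated = self.t > 10.0
        return self._obs(), reward, False, truncated, {}

    def _obs(self):
        return np.concatenate([self.q, self.qd]).astype(np.float32)

if __name__ == "__main__":
    model = PPO("MlpPolicy", SnakeEnvSketch(), verbose=0)
    model.learn(total_timesteps=10_000)  # short run, just to exercise the training loop
```

The weight on the tracking term reflects the design choice discussed in the abstract: the reference motion biases exploration toward plausible gaits, while the forward-progress term leaves room for the policy to deviate from the reference when that helps locomotion.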
File | Description | Size | Format
---|---|---|---
2023_05_Biffi_Tesi_01.pdf (not accessible) | thesis text | 3.67 MB | Adobe PDF
2023_05_Biffi_Executive Summary_02.pdf (not accessible) | executive summary text | 854.56 kB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/212397