RL-RRT: a sample-based algorithm for multi-query path planning of robotic manipulators
CHERUBIN, ROBERTO
2019/2020
Abstract
Sample-based algorithms are among the most effective for solving path-planning problems. They explore the configuration space of a robot in order to find a feasible path connecting a start position to a goal position while avoiding collisions with obstacles. A very efficient algorithm of this kind is the RRT (Rapidly-exploring Random Tree). The algorithm builds a tree seeded at the start position and explores the configuration space of the robot until the goal position is connected to the tree. The strength of this approach lies in its rapidly-exploring property, which allows a solution to be found quickly. However, every time a new path is required, the planner must compute a new tree from scratch. This thesis addresses this problem by using previous explorations to provide prior knowledge for subsequent searches. The proposed solution is based on a Reinforcement Learning technique that helps the algorithm understand the fastest way to connect two configurations. This Reinforcement Learning technique is supported by an intelligent sampling scheme that helps select the most promising samples to consider. The intelligent sampling is itself modelled as a Reinforcement Learning process, in which the algorithm learns the best policy during the query. Lastly, a Neural Network is trained to describe the surrounding environment as accurately as possible, in order to recognize the regions where the robot would collide with obstacles; it is also used to assist the algorithm in the intelligent sampling procedure.
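The sample-extend-connect loop of the RRT described above can be sketched in a few lines. This is a generic, minimal 2D illustration, not the thesis's RL-augmented variant: the collision checker `is_free`, the workspace bounds, and all parameter names are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Minimal 2D RRT sketch: grow a tree from `start` until `goal` is reached.

    `is_free(p)` is an assumed user-supplied predicate returning True for
    collision-free points in a 10x10 workspace; parameters are illustrative.
    """
    rng = random.Random(seed)
    nodes = [start]            # tree vertices
    parent = {0: None}         # index of each vertex's parent in the tree

    for _ in range(max_iters):
        # Sample a random configuration (with a small bias toward the goal).
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        # Find the tree node nearest to the sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        # Steer from that node toward the sample by at most `step`.
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(1.0, step / d)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if not is_free(new):
            continue           # discard configurations in collision
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        # Once the goal is within tolerance, walk back up the tree.
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no path found within the iteration budget
```

Each query restarts from an empty tree, which is exactly the single-query cost the thesis seeks to amortize across multiple queries.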
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/153979