A deep reinforcement learning-based algorithm for exploration planning
PADOVANI, MATTEO
2019/2020
Abstract
Mobile robots have demonstrated high levels of autonomy, enabling their employment in a wide range of missions. To enhance the performance of intelligent robots, many studies have focused on the development of autonomous navigation algorithms. The ability to navigate, combined with the capacity to explore autonomously, i.e., to construct a map of the surrounding unknown environment while moving along safe, obstacle-free trajectories, is one of the most important skills for a mobile robot. This capability enables the deployment of such systems in complex, inaccessible, and dangerous environments, where they can carry out a generic task autonomously. Early algorithms, in particular next-best-view and frontier-based methods, do not rely on artificial intelligence techniques: to build the map of the environment, they solve the autonomous exploration problem by searching for the next position at which to perform sensor readings, exploiting the expansion of random exploration trees within the environment. Despite their success, classic autonomous navigation algorithms have shown limitations in the transfer from simulation to real environments. Consequently, recent studies have applied Deep Neural Networks to Reinforcement Learning algorithms, modeling the autonomous exploration problem as a Partially Observable Markov Decision Process (POMDP). These methods, known as Deep Reinforcement Learning (DRL) algorithms, exploit the strong feature-extraction capability of neural networks to map sensor data into control policies. The aim of this dissertation is to carry out a preliminary design of a DRL-based autonomous decision-making algorithm for the exploration of a 2D environment. The proposed architecture is based on the Deep Q-Network (DQN) algorithm developed by the DeepMind team.
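As a rough illustration only (not the thesis implementation), the DQN update underlying the architecture the abstract refers to computes a temporal-difference target from a periodically synchronized target network. A minimal NumPy sketch, with hypothetical toy dimensions and a linear Q-function standing in for the deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: 4 discrete actions, 8-dimensional sensor observation.
n_actions, obs_dim, gamma = 4, 8, 0.99

# Linear Q-functions stand in for the online and target networks.
W_online = rng.normal(size=(obs_dim, n_actions))
W_target = W_online.copy()  # in real DQN, synced to W_online every N steps

def q_values(W, obs):
    """Q(s, a) for all actions a; returns an array of shape (n_actions,)."""
    return obs @ W

# One transition (s, a, r, s') as it would be sampled from a replay buffer.
s = rng.normal(size=obs_dim)
s_next = rng.normal(size=obs_dim)
a, r = 2, 1.0

# DQN target: y = r + gamma * max_a' Q_target(s', a')
y = r + gamma * q_values(W_target, s_next).max()

# Squared TD error on the taken action is what the gradient step minimizes.
td_error = y - q_values(W_online, s)[a]
loss = td_error ** 2
```

In the full algorithm this loss would be averaged over a minibatch from the replay buffer and backpropagated through a convolutional network; the sketch only shows the target computation.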
The algorithm was validated on the training map with altered obstacle dimensions in order to assess its sensitivity to small environmental changes. This test showed good adaptability of the proposed algorithm, which leaves room for further development toward better generalization.
Matteo_padovani_Msc_Thesis.pdf | Open Access from 08/04/2022 | 5.13 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/173789