Delayed reinforcement learning, an imitation game
MARAN, DAVIDE
2020/2021
Abstract
This research thesis deepens the study of delayed interaction between an agent and its environment in the context of reinforcement learning. Reinforcement learning is perhaps one of the most general paradigms of machine learning: it can model a very wide range of situations in which an autonomous agent interacts with an environment to complete a task. This generality brings considerable complexity to the training phase which, as we will see, is exacerbated when the observation of states and the execution of actions are subject to a delay. We therefore propose an algorithm that aims to obtain the best possible performance while mitigating the cost in terms of interactions with the environment. To do so, we reduce the problem of learning with an interaction delay to the undelayed case by means of imitation learning. After defining the algorithm, we present an in-depth theoretical study culminating in the proof of several theorems that, under appropriate hypotheses, guarantee its performance. The validity of our method is finally demonstrated by experiments on the most common simulated environments that are particularly susceptible to delays. The results show that our algorithm outperforms the state of the art in this field while requiring a significantly lower number of interactions with the environment. Finally, we conclude the thesis with some reflections on possible developments of this new approach.
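The core reduction the abstract describes can be illustrated with a short sketch. The following is a minimal, hypothetical illustration, not the thesis's actual algorithm: it assumes a gym-style environment (old 4-tuple `step` API), a constant observation delay `delta`, an undelayed `expert` policy mapping true states to actions, and a generic supervised learner `fit`, all of which are assumptions of this sketch.

```python
# Minimal sketch of the delayed-to-undelayed reduction via imitation learning.
# Everything here is an illustrative assumption: a gym-style `env`, a constant
# observation delay `delta`, an undelayed `expert` policy, and a supervised
# learning routine `fit` are all placeholders, not the thesis's implementation.

def rollout(env, policy, delta, horizon):
    """Run one episode, feeding the policy *augmented* states.

    Under a constant observation delay, at time t the agent only sees the
    state from delta steps ago plus the actions taken since; this augmented
    state restores the Markov property of the decision problem.
    """
    states = [env.reset()]   # true states s_0, s_1, ...
    actions = []             # actions a_0, a_1, ...
    pairs = []               # (augmented state, true current state)
    for t in range(horizon):
        if t < delta:
            a = env.action_space.sample()  # warm-up before the first observation arrives
        else:
            augmented = (states[t - delta], tuple(actions[t - delta:t]))
            pairs.append((augmented, states[t]))
            a = policy(augmented)
        s, _, done, _ = env.step(a)
        states.append(s)
        actions.append(a)
        if done:
            break
    return pairs


def imitate_delayed(env, expert, delta, n_iter, horizon, fit):
    """DAgger-style loop: label augmented states with the undelayed expert.

    In a simulator the true current state is available for labelling, even
    though the learned policy only ever conditions on augmented states.
    """
    dataset = []
    learner = lambda aug: expert(aug[0])  # crude warm start: act as if undelayed
    for _ in range(n_iter):
        for augmented, s_now in rollout(env, learner, delta, horizon):
            dataset.append((augmented, expert(s_now)))  # expert's undelayed action
        learner = fit(dataset)  # supervised learning on the aggregated dataset
    return learner
```

Conditioning the learner on the augmented state while querying the expert on the true simulator state is what turns the delayed problem into ordinary supervised learning, which is the sense in which the delayed setting is brought back to the undelayed one.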
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| Maran_thesis_delays.pdf | Thesis | 4.25 MB | Adobe PDF | openly accessible online |
| Maran_executive_summary.pdf | Extended summary | 858.75 kB | Adobe PDF | openly accessible online |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/183532