Configurable Markov decision processes
MUTTI, MIRCO
2016/2017
Abstract
The Markov Decision Process (MDP) is a fundamental framework for modelling Sequential Decision Making problems. In this context, a learning agent interacts with an environment in order to reach a potentially long-term goal. Solving an MDP means finding a behaviour for the agent that leads to optimal performance, while a learning algorithm is a method to solve an MDP starting from an initial behaviour. The performance of the agent is determined by the feedback provided by the environment in the form of a reward signal. In the traditional formulation, the environment is fixed, stationary, and out of the agent's control. However, in many real-world problems it is possible to configure, to a limited extent, some environmental parameters in order to improve the performance of the learning agent. This Thesis proposes a novel framework, Configurable Markov Decision Processes (CMDP), which extends the traditional MDP framework to model such real-world scenarios. Moreover, we provide a new learning algorithm, Safe Policy Model Iteration (SPMI), to deal with the learning problem in the CMDP context. To this end, SPMI jointly and adaptively optimizes the agent's behaviour and the environment configuration, while guaranteeing a monotonic improvement of the performance. After introducing our approach and deriving some related theoretical results, we develop two prototypical domains representing CMDP scenarios. We then present an experimental evaluation on these domains to show the benefits of environment configurability on the performance of the learned policy and to highlight the effectiveness of our approach in solving a CMDP.
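The abstract does not spell out how SPMI carries out its joint update, so the snippet below is only a minimal, illustrative sketch of the general idea it describes: a conservative update that moves both the policy and the environment configuration by small convex steps and keeps a step only when performance does not decrease. The toy problem, the two admissible models `P_low`/`P_high`, the fixed step sizes, and the accept-if-not-worse check are all assumptions made for illustration; the thesis instead derives theoretical bounds that guarantee the monotonic improvement.

```python
import numpy as np

# Illustrative toy, not the SPMI algorithm from the thesis: both the policy
# pi (|S| x |A|) and the transition model P (|S| x |A| x |S|) are updated
# jointly by small convex steps, keeping only non-worsening updates.

nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

R = rng.uniform(0.0, 1.0, (nS, nA))   # reward for each state-action pair
mu = np.ones(nS) / nS                 # initial state distribution

def random_model():
    P = rng.uniform(size=(nS, nA, nS))
    return P / P.sum(axis=2, keepdims=True)

def evaluate(pi, P):
    """State values and expected discounted return of policy pi under model P."""
    P_pi = np.einsum('sa,sap->sp', pi, P)        # state-to-state kernel under pi
    r_pi = (pi * R).sum(axis=1)                  # expected reward per state
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    return V, mu @ V

# Two admissible environment configurations (transition models); the
# configuration in use is moved between them by convex steps.
P_low, P_high = random_model(), random_model()

pi = np.ones((nS, nA)) / nA                      # uniform initial policy
P = P_low.copy()                                 # initial configuration
alpha = beta = 0.1                               # conservative step sizes

V, J = evaluate(pi, P)
for _ in range(200):
    # Greedy policy target under the current model and value estimate.
    Q = R + gamma * np.einsum('sap,p->sa', P, V)
    pi_target = np.eye(nA)[Q.argmax(axis=1)]

    # Candidate joint update: small convex steps for policy and configuration.
    pi_new = (1 - alpha) * pi + alpha * pi_target
    P_new = (1 - beta) * P + beta * P_high

    V_new, J_new = evaluate(pi_new, P_new)
    if J_new >= J:                               # keep only non-worsening steps
        pi, P, V, J = pi_new, P_new, V_new, J_new

print(f"final performance: {J:.3f}")
```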
File | Description | Size | Format
---|---|---|---
2018_04_Mutti.pdf | Thesis text (openly accessible online) | 1.72 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/141140