A decentralized multi-agent approach to industrial planning using Monte Carlo tree search and Deep Reinforcement Learning
Tenedini, Davide
2023/2024
Abstract
Deep Reinforcement Learning (DRL) has transformed complex decision-making by enabling agents to learn sophisticated strategies through interaction with their environment. Despite recent breakthroughs, DRL methods still face limitations that prevent their adoption for real-world industrial problems. To close the gap between theoretical advances and successful deployment in practical settings, we present a decentralized multi-agent approach to the industrial Pick&Place problem, leveraging DRL methods and Monte Carlo Tree Search (MCTS) for planning. The first step of our work is the development, optimization, and validation of a simulator that models the dynamics of a real plant. We then formulate the Pick&Place problem as a Markov game. Our solution to this setting, AlphaPick, is an AlphaZero-like architecture that can learn in cooperative multi-agent settings. We design a two-step training pipeline: first, a neural network is trained with supervised learning to imitate an expert strategy; this network then bootstraps the agents in the second step, an iterative process that decentralizes learning with AlphaPick one agent at a time. Although the margin for improvement over the baseline strategy is small, the agents we trained matched its performance, showing promising prospects for deployment in more complex settings. Finally, our work also highlights the importance of planning in AlphaZero-like architectures.
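The AlphaZero-like loop the abstract describes (a policy/value network guiding MCTS via PUCT, with root visit counts driving action selection) can be sketched in miniature. This is an illustrative sketch only, not the thesis's AlphaPick implementation: the `toy_network` stub, the single-step toy game, and all names are hypothetical, and the backup omits discounting since the setting is cooperative rather than adversarial.

```python
import math

class Node:
    """A search-tree node holding a prior, visit count, and value sum."""
    def __init__(self, prior):
        self.prior = prior
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}  # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the child maximizing the PUCT score used in AlphaZero-style search."""
    def score(child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def run_mcts(root_state, network, env_step, n_sims=200):
    """Grow a search tree for n_sims simulations; return visit counts per action."""
    priors, _ = network(root_state)
    root = Node(prior=1.0)
    root.children = {a: Node(p) for a, p in priors.items()}
    for _ in range(n_sims):
        node, state, path = root, root_state, [root]
        done, reward = False, 0.0
        while node.children:                      # selection
            action, node = select_child(node)
            state, reward, done = env_step(state, action)
            path.append(node)
        if done:
            value = reward                        # terminal: use the true return
        else:                                     # expansion: ask the network
            priors, value = network(state)
            node.children = {a: Node(p) for a, p in priors.items()}
        for n in path:                            # backup (no sign flip: cooperative)
            n.visits += 1
            n.value_sum += value
    return {a: c.visits for a, c in root.children.items()}

# Hypothetical single-step toy game: three actions with fixed returns,
# uniform priors from a stub network. Search should concentrate on action 1.
def toy_network(state):
    return {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}, 0.0

def toy_step(state, action):
    rewards = {0: 0.1, 1: 1.0, 2: 0.3}
    return None, rewards[action], True            # episode ends after one pick

visits = run_mcts("start", toy_network, toy_step, n_sims=200)
best = max(visits, key=visits.get)
```

In the full architecture, the visit counts returned at the root would serve both as the improved policy target for training the network and as the acting policy, which is the planning component whose importance the abstract emphasizes.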
| File | Size | Format |
|---|---|---|
| 2024_12_Executive_Summary_02.pdf (not accessible) | 543.4 kB | Adobe PDF |
| 2024_12_Tenedini_Thesis_01.pdf (not accessible) | 1.85 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/231162