Reinforcement learning-based optimal control of dynamic systems
Colella, Ivan
2023/2024
Abstract
This thesis explores the application of Reinforcement Learning (RL) for optimal control of closed-loop systems, specifically targeting the quadruple tank system. Reinforcement Learning enables agents to learn optimal control strategies through interaction with their environment, here implemented and analyzed via the Deep Deterministic Policy Gradient (DDPG) and Deep Q Function Value Iteration (DQVI) algorithms. These model-free methods are compared against the traditional Proportional-Integral-Derivative (PID) control method, showing improvements in dynamic adaptability, precision, and robustness. The study involves modeling the quadruple tank system and implementing the RL algorithms to evaluate their effectiveness. By examining the neural network architectures, tuning parameters, and training methodologies, this work highlights the strengths and limitations of the DDPG and DQVI algorithms. Experimental results demonstrate that both RL-based methods outperform the PID controller in achieving stable, accurate control responses. The DDPG algorithm, in particular, offers a significant advantage in training efficiency, making it well suited for real-time deployment scenarios. Additionally, a hybrid RL-PID approach is proposed, combining the reliability of PID control with the adaptability of RL. This hybrid framework enhances control of nonlinear systems such as the quadruple tank, ensuring stability while leveraging RL's ability to manage complex dynamics. The thesis concludes with recommendations for future work, emphasizing the need for real-world training of RL agents to overcome limitations observed in simulation. Such advances could position RL-based controllers as practical alternatives to traditional methods in industrial applications, paving the way for more resilient and adaptive control solutions.
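The abstract refers to modeling the quadruple tank system without reproducing the model. As a point of reference only, the standard formulation of this benchmark (Johansson's quadruple-tank process; the thesis may use a different parameterization) is:

```latex
\begin{aligned}
\dot{h}_1 &= -\frac{a_1}{A_1}\sqrt{2 g h_1} + \frac{a_3}{A_1}\sqrt{2 g h_3} + \frac{\gamma_1 k_1}{A_1}\, v_1 \\
\dot{h}_2 &= -\frac{a_2}{A_2}\sqrt{2 g h_2} + \frac{a_4}{A_2}\sqrt{2 g h_4} + \frac{\gamma_2 k_2}{A_2}\, v_2 \\
\dot{h}_3 &= -\frac{a_3}{A_3}\sqrt{2 g h_3} + \frac{(1-\gamma_2)\, k_2}{A_3}\, v_2 \\
\dot{h}_4 &= -\frac{a_4}{A_4}\sqrt{2 g h_4} + \frac{(1-\gamma_1)\, k_1}{A_4}\, v_1
\end{aligned}
```

where h_i are the tank levels, A_i the tank cross-sections, a_i the outlet areas, γ_i ∈ (0, 1) the valve flow splits, k_i the pump gains, v_i the pump voltages, and g the gravitational acceleration. The cross-coupling through γ_1 and γ_2 makes the system nonlinear and, for some valve settings, non-minimum phase, which is why it is a common testbed for comparing PID and learning-based control.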
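DDPG is named but not detailed in this record. The following is a minimal sketch of its actor-critic update, assuming a PyTorch implementation with illustrative layer sizes, learning rates, and a four-level/two-pump interface; none of these choices are taken from the thesis.

```python
# Minimal DDPG update sketch (PyTorch). OBS_DIM/ACT_DIM, network widths,
# learning rates, GAMMA and TAU are illustrative assumptions, not the
# thesis's actual hyperparameters.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 4, 2          # assumed: four tank levels, two pump voltages
GAMMA, TAU = 0.99, 0.005         # assumed discount factor and soft-update rate

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)       # deterministic action in [-1, 1], scaled to
                                 # pump voltage range outside this sketch

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()            # target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s2, done):
    """One gradient step on a replay batch; r and done have shape (B, 1)."""
    with torch.no_grad():                        # bootstrapped TD target
        q_target = r + GAMMA * (1 - done) * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()     # ascend Q(s, pi(s))
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)   # Polyak averaging

# one illustrative step on a random batch, purely to show the shapes
B = 32
ddpg_step(torch.randn(B, OBS_DIM), torch.rand(B, ACT_DIM) * 2 - 1,
          torch.randn(B, 1), torch.randn(B, OBS_DIM), torch.zeros(B, 1))
```

The deterministic actor makes the policy update a single backward pass through the critic, which is consistent with the training-efficiency advantage the abstract attributes to DDPG.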
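The hybrid RL-PID framework is likewise only summarized. One common way to realize such a combination, sketched here as an assumption about the architecture rather than the thesis's actual design, is to let a PID loop supply a stabilizing baseline command and add a bounded, learned RL correction:

```python
# Hedged sketch of a hybrid RL-PID controller: the blending weight alpha and
# the PID gains are illustrative assumptions, not the thesis's tuned values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev_e = 0.0, 0.0
    def __call__(self, setpoint, measurement):
        e = setpoint - measurement
        self.i += e * self.dt                  # integral of the error
        d = (e - self.prev_e) / self.dt        # finite-difference derivative
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

def hybrid_action(pid, rl_policy, setpoint, level, alpha=0.3):
    """Blend a PID command with an RL correction; alpha is an assumed weight."""
    u_pid = pid(setpoint, level)
    u_rl = rl_policy(setpoint, level)          # e.g. a trained DDPG actor
    return (1 - alpha) * u_pid + alpha * u_rl

# illustrative usage with a stand-in policy
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.1)     # hypothetical gains
u = hybrid_action(pid, lambda sp, y: 0.0, setpoint=12.0, level=10.5)
```

Keeping the PID term dominant (a small alpha, possibly scheduled up as confidence in the policy grows) preserves the familiar behavior of the tuned loop while the RL term absorbs nonlinearity that fixed gains handle poorly.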
| File | Description | Size | Format |
|---|---|---|---|
| 2024_12_Colella_Tesi.pdf | Thesis file (online access restricted to authorized users) | 2.47 MB | Adobe PDF |
| 2024_12_Colella_Executive_Summary.pdf | Executive summary (online access restricted to authorized users) | 1.03 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/229951