Reinforcement learning for adaptive control in non-stationary settings
Zappia, Sara
2024/2025
Abstract
Reinforcement Learning methods, when applied to real-world control applications, can become suboptimal when the system dynamics change gradually due to component deterioration, shifts in operating conditions, and other factors. Classical Reinforcement Learning methods assume that the system is stationary, modeling the transition probability function as time-independent. We propose a new modeling framework for stochastic dynamical systems characterized by regular non-stationarity, which reflects the gradual evolution of the dynamics and provides a faithful formulation of this real-world control problem. Specifically, we model the stochastic dynamical system as a Regular Non-Stationary MDP, which explicitly captures the regular evolution of the dynamics through a Lipschitz bound on the divergence between the evolving transition probability functions. Building on this formulation, we introduce an algorithmic solution that exploits this structure to maintain optimality: an actor-only, off-policy policy gradient method that performs periodic retrainings to adapt the control policy to the evolving dynamics. The method is based on Behavioral Policy Optimization, a technique that searches for the optimal behavioral policy with which to update the target policy, so as to sample trajectories containing states where the current target policy is suboptimal and update it accordingly. It also incorporates a regulation mechanism to tune the trade-off between exploration and exploration cost during retraining. The proposed algorithm is analyzed empirically through extensive tests on benchmark environments adapted to exhibit this type of non-stationarity.
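Two equations can make the abstract's main ingredients concrete. The exact divergence measure and Lipschitz constant used in the thesis are not given here, so the following is a hedged sketch rather than a quotation. A Lipschitz bound on the evolution of the transition kernel can be written as

$$D\big(P_{t'}(\cdot \mid s, a),\, P_t(\cdot \mid s, a)\big) \le L\,|t' - t| \qquad \forall s \in \mathcal{S},\ a \in \mathcal{A},\ t, t' \ge 0,$$

where $D$ is a divergence between probability distributions (for instance, total variation or a Wasserstein distance) and the constant $L$ limits how far the dynamics can drift per time step. Likewise, an off-policy policy gradient of the kind the method builds on typically takes the standard importance-sampling form

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim p_{\beta}}\!\left[\frac{p_\theta(\tau)}{p_{\beta}(\tau)}\, \nabla_\theta \log p_\theta(\tau)\, R(\tau)\right],$$

where $\pi_\theta$ is the target policy, $\beta$ the behavioral policy, and $R(\tau)$ the return of trajectory $\tau$; Behavioral Policy Optimization, as described above, chooses $\beta$ so that sampled trajectories concentrate on states where $\pi_\theta$ is currently suboptimal.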
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2026_03_Zappia_Tesi.pdf | Thesis | 1.4 MB | Adobe PDF | Authorized users only, from 02/03/2027 |
| 2026_03_Zappia_ExecutiveSummary.pdf | Executive Summary | 718.51 kB | Adobe PDF | Not accessible |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/253629