Uncertainty-driven exploration in sparse model-based reinforcement learning
Gusmeroli, Enea
2024/2025
Abstract
Reinforcement Learning (RL) algorithms often require a large number of interactions with the environment, which may be impractical or costly in real-world scenarios. Model-Based Reinforcement Learning (MBRL) mitigates this limitation by learning a model of the environment to simulate interactions and reduce reliance on real-world data. The effectiveness of this approach depends on the accuracy of the model, particularly in long-horizon planning, where prediction errors accumulate. This work introduces a novel MBRL algorithm built upon the Sparse Identification of Nonlinear Dynamics (SINDy) framework, which incorporates uncertainty estimation over model parameters to guide efficient exploration. A theoretical analysis, applicable to a wide class of systems with sub-Gaussian noise, establishes regret bounds that scale with the sparsity of the learned model, even under approximate dynamics. Since the theoretical solution is computationally inefficient, a practical Dyna-style variant, named Optimistic Simulation with Confidence-bAll Regression (OSCAR), is proposed. This algorithm performs parallel simulations using models sampled from the confidence ball built around the SINDy parameters, enabling uncertainty-driven exploration. By combining this form of guided exploration with the Soft Actor-Critic (SAC) algorithm, the agent consistently achieves optimal performance, even in environments where SAC alone struggles to learn or fails completely. The proposed method preserves the theoretical guarantees while reducing computational overhead, and it improves both sample efficiency (the number of interactions with the environment) and policy performance compared to classical Dyna-style MBRL approaches with SAC.
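The sketch below is a minimal NumPy illustration of the two ingredients named in the abstract, not the thesis code: sequentially thresholded least squares (the standard SINDy regression) and sampling perturbed coefficient matrices from a norm ball around the estimate. The function names, the spherical (rather than ellipsoidal) confidence ball, the radius value, and the toy data are all assumptions made for this example.

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iters=10):
    """Sequentially thresholded least squares: the standard SINDy regression.
    Theta: (n_samples, n_features) library of candidate nonlinear terms.
    dXdt:  (n_samples, n_states) regression targets (state derivatives).
    Returns a sparse coefficient matrix Xi with Theta @ Xi ~= dXdt."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold            # prune small coefficients
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):            # refit each state dimension
            active = ~small[:, k]
            if active.any():
                Xi[active, k] = np.linalg.lstsq(
                    Theta[:, active], dXdt[:, k], rcond=None)[0]
    return Xi

def sample_from_confidence_ball(Xi_hat, radius, rng):
    """Draw one model uniformly from a Frobenius-norm ball of the given
    radius around the SINDy estimate (a spherical simplification, assumed
    here, of the confidence sets used in the theoretical analysis)."""
    direction = rng.standard_normal(Xi_hat.shape)
    direction /= np.linalg.norm(direction)                # random direction
    r = radius * rng.uniform() ** (1.0 / Xi_hat.size)     # uniform in ball
    return Xi_hat + r * direction

# Toy usage: fit a sparse model from noisy data, then sample one model per
# parallel simulator; disagreement between rollouts reflects parameter
# uncertainty, which a Dyna-style agent can exploit optimistically.
rng = np.random.default_rng(0)
Theta = rng.standard_normal((200, 6))             # toy feature library
Xi_true = np.zeros((6, 2))                        # ground truth is sparse
Xi_true[1, 0] = -1.0
Xi_true[3, 1] = 0.5
dXdt = Theta @ Xi_true + 0.01 * rng.standard_normal((200, 2))
Xi_hat = stlsq(Theta, dXdt, threshold=0.05)
models = [sample_from_confidence_ball(Xi_hat, radius=0.1, rng=rng)
          for _ in range(8)]                      # one model per simulator
```

In a full Dyna-style loop of the kind the abstract describes, each sampled coefficient matrix would plausibly drive one parallel simulator, with SAC trained on the resulting imagined transitions; the details of how OSCAR selects among the sampled models are in the thesis itself.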
File | Description | Size | Format | Access
---|---|---|---|---
2025_07_Gusmeroli_Executive_Summary.pdf | Executive Summary | 718.12 kB | Adobe PDF | Openly accessible on the internet
2025_07_Gusmeroli_Thesis.pdf | Thesis | 1.11 MB | Adobe PDF | Openly accessible on the internet
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240833