Towards streaming continual reinforcement learning: challenges and open questions in non-stationary environments
Maccagni, Rebecca
2024/2025
Abstract
Reinforcement Learning (RL) is a subfield of Machine Learning that studies how an agent, by interacting with an environment, can learn which actions to execute based on the reward signals it receives. In recent years, research in this field has expanded to dynamic, non-stationary environments, where information no longer arrives in static form but instead flows through heterogeneous, unbounded data streams that require real-time processing. In this new scenario, Continual Reinforcement Learning (CRL) has emerged as a paradigm that unifies classical RL with the principles of Continual Learning (CL), whose primary goal is to avoid a phenomenon known as catastrophic forgetting, in which adapting to new concepts leads the agent to forget what it has learned so far. This thesis proposes integrating a CRL-based framework with principles of Streaming Continual Learning, a recent research area that, alongside CL, includes the analysis of agent adaptation over time using Streaming Machine Learning methodologies. As a result, we propose a new unified paradigm, called Streaming Continual Reinforcement Learning. The objective is to handle situations in which changes can affect both visual features and the environment's intrinsic dynamics, leading the agent to redefine, partially or completely, its learning policy. In our experiments, we show that different strategies address these changes in different ways, and we propose a unified architectural approach suited to the new context.
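The agent–environment loop and the drift problem described above can be illustrated with a minimal sketch (not taken from the thesis; all names, parameters, and the toy environment are illustrative): an epsilon-greedy agent on a two-armed bandit whose reward dynamics flip mid-stream. Using a constant step size, rather than sample averaging, lets the value estimates keep adapting after the change instead of freezing on the old dynamics.

```python
import random

# Illustrative sketch only: an epsilon-greedy agent facing a non-stationary
# two-armed bandit whose reward probabilities flip halfway through the stream.
def run(steps=4000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]  # running action-value estimates for the two arms
    for t in range(steps):
        # Concept drift: halfway through, the best arm switches.
        p = [0.8, 0.2] if t < steps // 2 else [0.2, 0.8]
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: q[i])
        r = 1.0 if rng.random() < p[a] else 0.0
        # Constant step size keeps adapting after the drift
        # (a 1/n sample average would react ever more slowly).
        q[a] += alpha * (r - q[a])
    return q

q = run()
print(q)  # after the drift, arm 1's estimate should exceed arm 0's
```

This toy example only captures reward-dynamics drift; the thesis considers the harder setting where visual features and environment dynamics both change and the full policy must be partially or completely redefined.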
| File | Description | Availability | Size | Format |
|---|---|---|---|---|
| 2026_03_Maccagni_Tesi.pdf | Thesis text | Openly accessible online from 24/02/2027 | 1.08 MB | Adobe PDF |
| 2026_03_Maccagni_Executive_Summary.pdf | Executive Summary text | Openly accessible online from 27/02/2027 | 560.48 kB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/251180