Novel perspectives on multi-fidelity sequential decision-making
Poiani, Riccardo
2024/2025
Abstract
In recent years, sequential decision-making problems have received significant attention in AI, as they model a broad spectrum of real-world problems. In these problems, an agent interacts with an environment over time to achieve a specific goal, and the defining feature is that each decision the agent makes influences future options and outcomes. A large variety of general techniques has been developed to address the resulting challenges, including multi-armed bandit algorithms as well as reinforcement learning and inverse reinforcement learning methods. Although these techniques have shown promising, and even surprising, results, they often require a large number of interactions with the environment to reach a satisfactory performance level. In several real-world applications, however, imprecise but cheaper data (e.g., interactions generated using a low-fidelity model of the environment) can be collected and exploited to make the training process more efficient. In this context, multi-fidelity learning algorithms have recently gained attention as a promising option for balancing performance and learning efficiency. In this dissertation, we explore multi-fidelity learning from several novel perspectives. Our primary goal is to enhance learning efficiency in sequential decision-making problems by understanding how to leverage cheaper, yet less informative, data, which are commonly available in many scenarios. The contributions can be broadly divided into three parts. In the first part, we study a novel multi-fidelity variant of the best-arm identification problem. In the classical best-arm identification problem, the agent has access to a set of arms, each associated with a reward signal, and it needs to identify the optimal arm while minimizing the number of interactions with the environment. In the multi-fidelity variant, instead, the agent also has access to a set of cheaper approximations of each arm, which can be exploited to reduce the identification cost. In the second part, we study how to improve Monte Carlo estimation techniques for sequential decision-making problems by leveraging the reset function commonly available in many reinforcement learning simulators. Specifically, we identify and seek to exploit a hidden, intrinsic multi-fidelity structure within Monte Carlo reinforcement learning algorithms, namely the ability of an agent to truncate a Monte Carlo trajectory to obtain a less informative, but cheaper, interaction with the environment. Finally, in the third part, we investigate the inverse reinforcement learning problem, where the goal is to infer a reward function that explains the behavior of an expert agent acting in a sequential decision-making environment. Here, we consider a multi-fidelity variant in which the learning system can also observe the behavior of multiple sub-optimal experts; gathering demonstrations of sub-optimal behavior is usually cheaper than observing the policy of an optimal agent.
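To make the first contribution concrete, the following is a minimal sketch of a multi-fidelity best-arm identification loop. The instance, the Gaussian noise model, the known per-fidelity bias bounds, and the elimination rule are all illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance (not from the thesis): 3 arms, 2 fidelities.
# Fidelity 0 is a cheap, biased approximation; fidelity 1 is the real arm.
true_means = np.array([[0.45, 0.50],   # arm 0: approximate mean, true mean
                       [0.62, 0.60],   # arm 1 (optimal)
                       [0.15, 0.10]])  # arm 2
costs = np.array([1.0, 10.0])          # cost of one pull at each fidelity
max_bias = np.array([0.10, 0.0])       # known bound on |approximation - truth|

def pull(arm, fid):
    """One noisy observation of `arm` at fidelity `fid`."""
    return true_means[arm, fid] + 0.1 * rng.standard_normal()

def identify(delta=0.05, max_rounds=3000):
    """Cost-aware elimination: sample the cheap fidelity until estimation noise
    shrinks to its bias floor, then move the surviving arms to full fidelity."""
    active = list(range(len(true_means)))
    sums, counts = np.zeros(len(true_means)), np.zeros(len(true_means))
    total_cost = 0.0
    for fid in range(len(costs)):
        sums[:], counts[:] = 0.0, 0.0
        for t in range(1, max_rounds + 1):
            for arm in active:
                sums[arm] += pull(arm, fid)
                counts[arm] += 1
                total_cost += costs[fid]
            means = sums[np.array(active)] / t
            # Hoeffding-style radius; eliminations must respect the bias bound.
            rad = np.sqrt(np.log(2 * len(true_means) * t**2 / delta) / (2 * t))
            active = [a for a, m in zip(active, means)
                      if m + 2 * rad + 2 * max_bias[fid] >= means.max()]
            if len(active) == 1 or rad <= max_bias[fid]:
                break  # noise resolved down to the bias floor: escalate fidelity
        if len(active) == 1:
            return active[0], total_cost
    # Budget exhausted: return the empirically best survivor.
    return max(active, key=lambda a: sums[a] / counts[a]), total_cost

arm, cost = identify()
print(f"identified arm {arm} at total cost {cost:.0f}")
```

Low-fidelity pulls here can only rule out clearly bad arms (those whose gap exceeds the bias bound); close calls must be settled at the expensive fidelity, which is the trade-off the first part of the thesis studies.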
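For the second contribution, a toy calculation (not the thesis's estimator) illustrates the fidelity/cost trade-off created by trajectory truncation: a rollout truncated after h steps costs h interactions, while the missing tail of the discounted return is bounded by γ^h R_max / (1 − γ). The chain dynamics and reward distribution below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, r_max = 0.95, 1.0

def truncated_return(h):
    """Discounted return of one rollout truncated after h steps, in a toy
    chain whose per-step reward is Uniform[0, 1] (assumed dynamics)."""
    rewards = rng.uniform(0.0, r_max, size=h)
    return np.sum(rewards * gamma ** np.arange(h))

def mc_estimate(n, h):
    """Average of n truncated rollouts; cost = n * h environment steps."""
    return np.mean([truncated_return(h) for _ in range(n)]), n * h

true_value = 0.5 * r_max / (1 - gamma)  # expected reward is 0.5 per step
for h in (20, 60, 200):
    est, cost = mc_estimate(n=500, h=h)
    bias_bound = gamma**h * r_max / (1 - gamma)  # worst-case truncated tail
    print(f"h={h:3d}  estimate={est:6.3f}  true={true_value:.3f}  "
          f"cost={cost:6d} steps  truncation bias <= {bias_bound:.4f}")
```

Shorter rollouts are cheaper but biased; longer ones are nearly unbiased but costly, so a reset-equipped simulator lets the learner mix trajectory lengths rather than commit to a single horizon.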
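For the third contribution, a small example of how sub-optimal demonstrations constrain the set of rewards compatible with observed behavior: a candidate reward is kept only if, under it, each observed expert is within its assumed sub-optimality gap of the optimal value. The MDP, policies, and gaps below are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP with known dynamics.
# P[a, s, :] is the next-state distribution; rewards depend on the state only.
gamma = 0.9
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.1, 0.9], [0.7, 0.3]]])   # action 1

def v_star(r):
    """Optimal state values for state reward r, via value iteration."""
    v = np.zeros(2)
    for _ in range(500):
        v = r + gamma * np.max(P @ v, axis=0)
    return v

def v_pi(r, pi):
    """Exact values of the deterministic policy pi (one action per state)."""
    P_pi = np.array([P[pi[s], s] for s in range(2)])
    return np.linalg.solve(np.eye(2) - gamma * P_pi, r)

def consistent(r, pi, xi):
    """Is reward r compatible with an expert pi that is at most xi-suboptimal?"""
    return np.all(v_star(r) - v_pi(r, pi) <= xi + 1e-8)

# Each observed expert (policy, sub-optimality gap) prunes the candidate set;
# cheap, loosely sub-optimal experts prune less than costly near-optimal ones.
experts = [(np.array([0, 1]), 0.5), (np.array([1, 1]), 5.0)]
for r in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    ok = all(consistent(r, pi, xi) for pi, xi in experts)
    print(f"r = {r} feasible under all experts: {ok}")
```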
File | Size | Format
---|---|---
thesis.pdf (open access) | 2.02 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/237093