Regularized linear bandit for constrained opponent exploitation in extensive form games
Del Vecchio, Giovanni
2020/2021
Abstract
Over the years, the interest in the problem-solving capabilities of Machine Learning techniques has made them a key approach to problems in many different fields. This success is due to the ability of the algorithms to reach a goal by learning, more or less autonomously, from experience. The effectiveness of this framework probably lies in its similarity to the learning modes inherent in human nature: to understand how to solve a problem or complete a task, we need to have already faced the same situation, or something that resembles it. All this has pushed research toward a higher level of abstraction, so as to fully exploit the experience accumulated by the algorithm, resulting in what is called Meta Learning: leveraging previous tasks to learn how to learn, and thus solve future tasks more effectively. In this work we show how to apply this setting to a family of algorithms known as Multi-Armed Bandits: a Reinforcement Learning framework as simple as it is powerful, particularly useful in online settings constrained by partial observability of the environment in which they operate. In particular, this work focuses on problems from Game Theory and on the parallel between the opponent's actions and the tasks to be solved. Starting with an introduction to the basic concepts of MABs and of extensive-form game solving, the work illustrates the classical principles and uses of Meta Learning and a particular class of MABs known as Linear Contextual Bandits, characterized by the use, during learning, of a context that describes the space of possible actions. We then describe how such algorithms can be used to solve extensive-form games. Inspired by the concept of Meta Learning, we finally present BiasedCOX, an algorithm that uses regularization techniques to improve learning when exploiting an opponent whose expected utility is constrained. Lastly, we report a theoretical and empirical comparison of the efficacy of regularization in this type of setting.
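As an illustration of the two ingredients mentioned in the abstract, a linear contextual bandit and a regularizer biased by prior experience, the following minimal Python sketch shows a LinUCB-style learner whose ridge penalty is centered on a prior parameter vector rather than on zero. The class name `BiasedLinUCB`, the `theta_prior` argument, and the exploration width `alpha` are illustrative assumptions of this sketch, not the actual BiasedCOX algorithm or the notation used in the thesis.

```python
import numpy as np


class BiasedLinUCB:
    """Illustrative LinUCB-style linear contextual bandit whose ridge
    regularizer is biased toward a prior parameter vector (e.g. one
    distilled from previously solved tasks) instead of toward zero.
    This is a sketch, not the thesis's BiasedCOX algorithm."""

    def __init__(self, dim, alpha=1.0, lam=1.0, theta_prior=None):
        self.alpha = alpha                      # exploration width (assumed hyperparameter)
        self.theta_prior = (np.zeros(dim) if theta_prior is None
                            else np.asarray(theta_prior, dtype=float))
        # Biased ridge regression: minimize sum (r - x.theta)^2 + lam * ||theta - theta_prior||^2
        self.A = lam * np.eye(dim)              # lam * I + sum of x x^T
        self.b = lam * self.theta_prior.copy()  # bias pulls the estimate toward the prior

    def select(self, contexts):
        """Pick the arm whose context maximizes the optimistic (UCB) score."""
        A_inv = np.linalg.inv(self.A)
        theta_hat = A_inv @ self.b
        scores = []
        for x in contexts:
            x = np.asarray(x, dtype=float)
            scores.append(x @ theta_hat + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, context, reward):
        """Rank-one update of the regularized least-squares statistics."""
        x = np.asarray(context, dtype=float)
        self.A += np.outer(x, x)
        self.b += reward * x
```

Passing `theta_prior=None` recovers the usual zero-centered ridge regularizer of LinUCB; a non-zero prior pulls early estimates toward behaviour learned on previous tasks, which is the meta-learning flavour of biased regularization described above.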
File | Description | Size | Format | Access
---|---|---|---|---
2022_04_del_Vecchio_02.pdf | Executive Summary | 1.01 MB | Adobe PDF | Openly accessible on the internet
2022_04_del_Vecchio_01.pdf | Master Thesis | 2.4 MB | Adobe PDF | Openly accessible on the internet
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/187406