Towards closing the gap in Restless Rising Bandits
MIGALI, CRISTIANO
2024/2025
Abstract
Restless Multi-Armed Bandits (MABs) are a general framework designed to handle real-world decision-making problems where the expected rewards evolve over time, such as in recommender systems and dynamic pricing. In this thesis, we investigate from a theoretical standpoint two well-known structured subclasses of restless MABs: the rising and the rising concave settings, where the expected reward of each arm evolves over time following an unknown non-decreasing function and an unknown non-decreasing concave function, respectively. Through a novel methodology of independent interest for general restless bandits, we establish new lower bounds on the expected cumulative regret for both settings. In the rising case, we prove a lower bound of order Ω(T²ᐟ³), matching known upper bounds for restless bandits; in the rising concave case, we derive a lower bound of order Ω(T³ᐟ⁵), showing for the first time that this setting is provably more challenging than stationary MABs. Then, we introduce Rising Concave Budgeted Exploration (RC-BE(α)), a new regret minimization algorithm designed for rising concave MABs. By devising a novel proof technique, we show that the expected cumulative regret of RC-BE(α) is of order Õ(T⁷ᐟ¹¹). Together, these results take a step towards closing the regret gap in rising concave MABs, positioning them between stationary and general restless bandit settings in terms of statistical complexity.
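For readers who want the formal statement behind these rates, the following is a minimal sketch of the restless rising (concave) setting and the regret notion the bounds refer to, written in standard bandit notation; the symbols μᵢ, γᵢ, and R_T are our own illustration and may not match the thesis's exact definitions.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Illustrative formalization of the restless rising (concave) MAB
% setting; notation is ours and may differ from the thesis.
At each round $t = 1, \dots, T$, the learner pulls an arm
$I_t \in \{1, \dots, K\}$ and observes a stochastic reward with mean
$\mu_{I_t}(t) \in [0, 1]$, which evolves with the round index $t$
(restless), regardless of how often each arm has been pulled.
\begin{align*}
  \gamma_i(t) &:= \mu_i(t+1) - \mu_i(t) \ge 0
    && \text{(rising: expected rewards non-decreasing in $t$),} \\
  \gamma_i(t+1) &\le \gamma_i(t)
    && \text{(concave: increments non-increasing in $t$).}
\end{align*}
The expected cumulative regret compares against the per-round best arm:
\[
  R_T := \sum_{t=1}^{T}
    \left( \max_{i} \mu_i(t) - \mathbb{E}\!\left[ \mu_{I_t}(t) \right] \right).
\]
% With this notation, the abstract's results read:
% $R_T = \Omega(T^{2/3})$ (rising), $R_T = \Omega(T^{3/5})$ (rising
% concave), and $R_T = \widetilde{O}(T^{7/11})$ for RC-BE($\alpha$).
\end{document}
```

Under this reading, the Ω(T³ᐟ⁵) lower bound exceeding the Θ(√(KT)) minimax rate of stationary MABs is what makes the rising concave setting provably harder than the stationary one, as the abstract claims.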
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2025_07_Migali_Tesi.pdf | Thesis | 1.13 MB | Adobe PDF | Accessible on the internet to everyone |
| 2025_07_Migali_Executive_Summary.pdf | Executive Summary | 698.79 kB | Adobe PDF | Accessible on the internet to everyone |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240041