Offline Reinforcement Learning for Radio Resource Management in Radio Access Networks
PERI, SAMUELE
2023/2024
Abstract
Recent work has shown the potential of applying reinforcement learning (RL) to radio resource management (RRM) in radio access networks (RAN), thanks to its ability to learn complex relationships between parameters by interacting with an unknown environment. However, real-world RAN deployments are challenging, as the training phase may temporarily degrade the system's performance, potentially to the point of triggering link failure procedures. To address such issues, we consider a particular family of RL algorithms, namely offline RL methods, where agents are trained on previously collected datasets of transitions. We investigate the application of offline RL methods to link adaptation (LA), a crucial RRM functionality. In particular, we design and evaluate multiple offline RL algorithms for LA, including batch-constrained Q-learning (BCQ), conservative Q-learning (CQL), behavioral cloning (BC), and decision transformer (DT), and compare their performance against two baselines: an outer-loop link adaptation (OLLA) algorithm, as commonly adopted in today's RAN systems; and an online off-policy deep Q-network (DQN) algorithm for LA. Since DT, unlike the other offline RL methods, treats RL as a sequence modelling problem, we propose modifications to the algorithm design and the problem formulation tailored to the use case. Our results show that all considered methods reach performance comparable to, and in some cases slightly better than, the DQN policy, and outperform OLLA. However, contrary to traditional applications of transformer models, which benefit from long input sequences for generative predictions, we observe that the stochastic nature of radio systems prevents DT from properly conditioning the generation of optimal actions at inference time on long input sequences, while it performs well with shorter sequences. The thesis also includes a preliminary study conducted on Gym environments that provides insights into the influence of the training dataset on the performance of offline RL policies.
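To illustrate the sequence-modelling formulation mentioned in the abstract, the following is a minimal, hypothetical sketch (not the thesis implementation) of how a decision-transformer-style policy could condition the choice of a modulation and coding scheme on a short context of past (return-to-go, state, action) tokens. The class and parameter names (DTPolicy, context_len, the CQI-based state features, the 28-level action space) are illustrative assumptions, and causal masking and timestep embeddings are omitted for brevity.

```python
# Sketch of decision-transformer-style action selection for link adaptation.
# Assumed/illustrative names: DTPolicy, context_len, state_dim (e.g. CQI features),
# num_actions (MCS indices). Causal masking and timestep embeddings are omitted.
import torch
import torch.nn as nn


class DTPolicy(nn.Module):
    def __init__(self, state_dim: int, num_actions: int,
                 d_model: int = 64, context_len: int = 8):
        super().__init__()
        self.context_len = context_len
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)  # channel-state token
        self.embed_action = nn.Embedding(num_actions, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_actions)

    def forward(self, rtg, states, actions):
        # rtg: (B, K, 1), states: (B, K, state_dim), actions: (B, K); the last
        # action is a placeholder and K <= context_len is the context window.
        tokens = torch.stack(
            (self.embed_rtg(rtg),
             self.embed_state(states),
             self.embed_action(actions)),
            dim=2,
        ).flatten(1, 2)             # interleave as (R_1, s_1, a_1, ..., R_K, s_K, a_K)
        h = self.encoder(tokens)
        return self.head(h[:, -2])  # predict a_K from the last state token


# Example: pick an MCS index from the last K steps of a hypothetical CQI-based state.
policy = DTPolicy(state_dim=3, num_actions=28)
K = policy.context_len
rtg = torch.rand(1, K, 1)                       # target returns-to-go
states = torch.rand(1, K, 3)                    # e.g. CQI, ACK/NACK rate, SINR estimate
actions = torch.zeros(1, K, dtype=torch.long)   # past MCS choices + placeholder
mcs = policy(rtg, states, actions).argmax(dim=-1)
```

In this formulation the context length K is the knob the abstract refers to: the thesis finds that, in the stochastic radio channel, short contexts condition action generation better than long ones.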
File | Description | Size | Format | Access
---|---|---|---|---
samuele_peri_executive_summary.pdf | Executive Summary | 919.29 kB | Adobe PDF | Openly accessible on the internet
samuele_peri_masters_thesis_updated.pdf | Master's Thesis Report | 2.04 MB | Adobe PDF | Openly accessible on the internet
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/227885