Investigating deep reinforcement learning for static optimization in optical networks
MERCAN, EMRE FURKAN
2019/2020
Abstract
Due to the increasing complexity of communication networks and the desire of Internet Service Providers (ISPs) to use their resources effectively, the demand for fast and scalable resource-allocation solutions has recently increased dramatically. To cope with this evolution, the management and configuration of communication networks have shifted toward automated administration, and machine learning (ML) solutions have been applied to the field of networking. Reinforcement Learning (RL), a branch of ML, follows a learning process in which an agent selects an action according to the current state of its environment, observes the result, and then adjusts its future strategy, devising an optimal decision-making policy without any prior knowledge of the environment. Over the last two decades, RL has been applied in many areas of communication and networking, including, but not limited to, access, caching, security, scheduling, and routing. In recent years, RL has been further improved by merging it with Deep Learning (DL). DL is a subfield of ML that learns patterns for complex problems to develop decision-making mechanisms using Artificial Neural Networks (ANNs), which are inspired by the architecture and functionality of the human brain. The combination of the two, Deep Reinforcement Learning (DRL), has been applied to solve both static and dynamic optimization problems in the literature. In our study, we focus on the benefits of the DRL approach for solving static optimization problems in optical networks, namely the Routing and Wavelength Assignment (RWA) problem. The use of RL for static optimization problems has been investigated in the literature but, to the best of our knowledge, most existing works propose an RL solution and compare it only with other existing RL algorithms or with simplistic baseline heuristics.
In this work, we aim to systematically analyze the performance of RL, comparing it also with a metaheuristic method (Genetic Algorithm) and an exact method, Integer Linear Programming (ILP). While previous works have already observed that RL is a powerful and promising technique for optimization problems, in this study we observe that the high adaptability and relatively low complexity of RL will be key factors for its further adoption in real-life implementations. Therefore, we provide a thorough analysis of an RL algorithm in terms of its optimality gap with respect to exact optimization models, its adaptability to changing problem inputs, and its complexity (evaluated as computational time and number of episodes), to identify under which conditions RL is a competitive alternative to existing techniques for solving the static RWA problem. To perform this analysis, we employ a DRL algorithm based on the Actor-Critic with Experience Replay (ACER) algorithm, introduced by Google DeepMind in 2017. As comparison terms, in addition to baseline heuristics such as the Shortest Path, Shortest Available Path, and Least-Loaded Path algorithms, the DRL algorithm is also contrasted against a metaheuristic method (Genetic Algorithm) and ILP, in terms of solution optimality and speed. Numerical results show that RL can provide fast and near-optimal solutions to the RWA problem after a reasonable amount of training (depending on problem complexity); however, the training period of the agent and its dependency on tuning are the main barriers to a widespread adoption of RL in real-life implementations.
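The agent-environment loop the abstract describes can be sketched with a minimal tabular Q-learning example. This is not the ACER agent used in the thesis (which relies on deep actor-critic networks with experience replay); the line-world environment, states, and rewards below are illustrative assumptions chosen only to show the observe-act-reward-update cycle:

```python
import random

# Minimal tabular Q-learning sketch of the RL loop: the agent observes a
# state, selects an action, receives a reward, and refines its policy with
# no prior model of the environment. Toy line-world, NOT the thesis' RWA
# environment: start at state 0, reach state N-1 for a reward of +1,
# paying a small cost of -0.01 per step.
N, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.1
ACTIONS = (-1, +1)                      # move left / move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Environment transition: returns (next_state, reward, done)."""
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else -0.01), s2 == N - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current value estimates,
        # occasionally explore a random action
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # temporal-difference update toward reward + discounted future value
        target = r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
```

After training, the greedy policy argmax over Q(s, a) moves right from every state; the thesis replaces the table with neural networks (actor and critic) and the toy states with RWA network states.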
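On the RWA side, the kind of greedy rule behind the Shortest Available Path baseline can be illustrated on a toy instance. The 4-node ring topology, link set, and wavelength count below are illustrative assumptions, not the thesis' network or its exact baseline implementation:

```python
# Toy static RWA instance: a hypothetical 4-node ring where each
# undirected link tracks the wavelengths already in use. The wavelength-
# continuity constraint requires a lightpath to use the same wavelength
# on every link it traverses.
W = 2                                   # wavelengths per fiber
links = {frozenset(e): set() for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
adj = {n: sorted(m for e in links for m in e if n in e and m != n)
       for n in range(4)}

def simple_paths(src, dst, path=None):
    """Enumerate all simple paths from src to dst (tiny networks only)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in adj[src]:
        if nxt not in path:
            yield from simple_paths(nxt, dst, path)

def shortest_available_path(src, dst):
    """Greedy baseline: scan candidate paths by hop count and take the
    first with a wavelength free on all of its links (first-fit)."""
    for path in sorted(simple_paths(src, dst), key=len):
        hops = [frozenset(h) for h in zip(path, path[1:])]
        for w in range(W):
            if all(w not in links[h] for h in hops):
                for h in hops:
                    links[h].add(w)     # reserve wavelength w on each hop
                return path, w
    return None                         # request blocked
```

Serving requests one at a time with a rule like this is what the baseline heuristics do; the ILP instead assigns routes and wavelengths to all requests jointly, and the RL agent learns which path/wavelength choices keep blocking low.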
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/175757