Reinforcement learning for routing optimization in converged space-ground networks
CONCA, ALESSANDRO
2024/2025
Abstract
Converged space-ground networks are essential for future global connectivity, but the highly dynamic nature of LEO (Low Earth Orbit) constellations introduces complex routing challenges. Constant satellite movement and a continuously varying topology render optimal paths unstable, while traffic load creates congestion in real time. The objective of this thesis is to design, simulate, and compare routing strategies for this demanding environment. To achieve this, a discrete-event network simulator was developed in Python, modeling a LEO constellation integrated with a terrestrial backbone. The analysis focuses on comparing two approaches for routing within the satellite segment: (1) a baseline Dijkstra algorithm, implemented as a centralized, snapshot-based approach that recalculates the optimal path at every constellation movement, and (2) a decentralized Reinforcement Learning algorithm, in which agents (the satellites) learn optimal, congestion-aware paths. Simulations were conducted under medium-to-high traffic load scenarios. The results demonstrate the clear limitations of the Dijkstra approach: unable to react to instantaneous congestion, it suffers a performance collapse under high load, with queueing latency becoming the dominant delay factor. Q-Learning, in contrast, successfully learned a load-balancing policy, as evidenced by congestion maps. Steady-state analysis demonstrates that the learned Q-Learning policy effectively mitigates queueing latency, even in the highest-traffic scenarios.
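The snapshot-based baseline described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis code: the node names (`S1`–`S4`), the inter-satellite link delays, and the graph layout are invented for the example; in the actual simulator the graph would be rebuilt from the constellation's position at each snapshot.

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path over one topology snapshot.

    adj: {node: [(neighbor, link_delay), ...]} -- the inter-satellite
    link graph frozen at the current constellation position.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None  # destination unreachable in this snapshot
    # Walk the predecessor chain back from dst to src.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Example snapshot: a 4-satellite ring plus one longer cross link.
snapshot = {
    "S1": [("S2", 5.0), ("S4", 5.0)],
    "S2": [("S1", 5.0), ("S3", 5.0)],
    "S3": [("S2", 5.0), ("S4", 5.0), ("S1", 12.0)],
    "S4": [("S3", 5.0), ("S1", 5.0)],
}
print(dijkstra(snapshot, "S1", "S3"))
```

Because the path is recomputed only when the topology changes, a route chosen here stays fixed until the next snapshot, which is exactly why the baseline cannot react to queues building up between recomputations.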
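The decentralized alternative can likewise be sketched as a per-satellite Q-routing-style agent. This is a hedged illustration of the general technique, not the thesis implementation: the class name, the hyperparameter values, and the `(destination, next_hop)` state-action layout are assumptions made for the example. The key idea matches the abstract: the per-hop cost includes the queueing delay actually experienced, so congested neighbors accumulate higher Q-values and are avoided.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

class SatelliteAgent:
    """One agent per satellite. Q[(dest, next_hop)] estimates the
    remaining delivery delay when forwarding toward dest via that
    neighbor (Q-routing style, lower is better)."""

    def __init__(self, neighbors):
        self.neighbors = neighbors
        self.q = defaultdict(float)

    def choose_next_hop(self, dest):
        # Epsilon-greedy: usually the neighbor with the lowest
        # estimated delay, occasionally a random exploratory choice.
        if random.random() < EPSILON:
            return random.choice(self.neighbors)
        return min(self.neighbors, key=lambda n: self.q[(dest, n)])

    def update(self, dest, next_hop, hop_cost, best_downstream):
        # hop_cost = propagation + queueing delay experienced on the
        # hop; best_downstream = the neighbor's own best estimate for
        # the rest of the path. Congestion raises hop_cost, which
        # raises Q and steers later packets away from the hot link.
        target = hop_cost + GAMMA * best_downstream
        self.q[(dest, next_hop)] += ALPHA * (target - self.q[(dest, next_hop)])
```

Each satellite updates only its own table from locally observed delays, which is what makes the scheme decentralized and lets it learn the load-balancing behavior the congestion maps reveal.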
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/247615