Design and tree-based distillation of a reinforcement learning agent for TCP congestion control
Ameri, Pardis
2024/2025
Abstract
With the growth of networked systems and the increasing sensitivity of network applications to delay, smarter and more adaptive congestion control solutions are becoming essential. Traditional Transmission Control Protocol (TCP) congestion control techniques depend on manually constructed, static rules, which frequently do not operate effectively in dynamic network settings [1]. This thesis presents a reinforcement learning (RL)–based framework that autonomously learns and adapts congestion control parameters from real-time network feedback, rather than relying on fixed, rule-based heuristics.

The proposed system uses an RL agent trained in a TCP-like simulation environment that mimics sender–receiver communication under various network conditions. The agent receives feedback from the environment (the network simulator) in terms of throughput, delay, packet loss, and related metrics, and optimises the values of two TCP parameters, namely the sending rate and the congestion window (CWND). Through continuous interaction with the environment, the RL agent identifies adaptive strategies for adjusting these parameters to enhance overall network efficiency and stability.

Although the RL agent effectively learns adaptive congestion-control behavior, its neural-network architecture is computationally demanding and unsuitable for deployment in programmable data planes or embedded environments. To enable practical implementation, a lightweight and interpretable student model is derived through knowledge distillation, enabling it to reproduce the teacher agent's decision policy with substantially reduced complexity. The distilled model closely matches the teacher's performance, achieving an R² of about 0.94, while maintaining comparable accuracy on real network traces (R² ≈ 0.81). A sensitivity analysis across different bandwidth, delay, loss-rate, and queue-size settings further confirms the robustness and scalability of the proposed architecture for real-time congestion-control deployment.
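To make the sender–receiver interaction loop described above concrete, the following is a minimal sketch of what such a TCP-like training environment could look like. The class name, link dynamics, and reward weights are illustrative assumptions for this sketch, not the thesis's actual simulator.

```python
class TcpLikeEnv:
    """Toy sketch of a TCP-like RL environment: the agent adjusts the
    congestion window (CWND) and observes throughput, delay, and loss.
    All dynamics and constants here are illustrative placeholders."""

    def __init__(self, bandwidth=100.0, base_rtt=40.0, queue_size=50.0):
        self.bandwidth = bandwidth    # link capacity, packets per RTT
        self.base_rtt = base_rtt      # propagation delay, ms
        self.queue_size = queue_size  # bottleneck buffer, packets
        self.cwnd = 10.0

    def step(self, action):
        # Action: relative adjustment of the congestion window.
        self.cwnd = max(1.0, self.cwnd * (1.0 + action))
        # Packets beyond link capacity back up in the queue; overflow is lost.
        backlog = max(0.0, self.cwnd - self.bandwidth)
        loss = max(0.0, backlog - self.queue_size) / self.cwnd
        delay = self.base_rtt + min(backlog, self.queue_size)  # + queueing
        throughput = min(self.cwnd, self.bandwidth) * (1.0 - loss)
        # Reward trades throughput against delay and loss (weights assumed).
        reward = throughput - 0.1 * delay - 50.0 * loss
        return (throughput, delay, loss, self.cwnd), reward

env = TcpLikeEnv()
state, reward = env.step(0.1)  # grow CWND by 10%, observe network feedback
```

An RL agent trained against such a loop sees exactly the feedback signals the abstract lists (throughput, delay, packet loss) and acts only through the CWND, mirroring the described setup.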
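The tree-based distillation step named in the title can be sketched as fitting a shallow decision tree to state–action pairs queried from the teacher, then scoring policy fidelity with R². In this sketch, `teacher_policy` is a hypothetical stand-in for the trained neural agent, and scikit-learn is assumed for the student; neither reflects the thesis's exact pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def teacher_policy(states):
    """Placeholder for the trained RL teacher: maps network states
    (throughput, delay, loss, cwnd) to a CWND adjustment."""
    thr, delay, loss, cwnd = states.T
    return np.where(loss > 0.01, -0.5, 0.1 * (1.0 - delay / 200.0))

# 1. Collect a distillation dataset by querying the teacher on many states.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 10.0, 0.0, 1.0], [200.0, 300.0, 0.2, 400.0],
                size=(10_000, 4))
y = teacher_policy(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Fit a shallow decision tree as the lightweight, interpretable student.
student = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)

# 3. Measure how faithfully the student reproduces the teacher's policy.
print("policy-fidelity R^2:", r2_score(y_test, student.predict(X_test)))
```

A shallow tree keeps the distilled policy interpretable and cheap enough to evaluate at line rate, which is what makes deployment in programmable data planes or embedded targets plausible.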
| File | Access | Size | Format |
|---|---|---|---|
| Pardis_Ameri_Executive_Summary.pdf | accessible on the internet to everyone | 1.27 MB | Adobe PDF |
| pardis_ameri_thesis.pdf | accessible on the internet only to authorized users | 7.35 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/246721