On extending ASTRA-sim framework to model WAN-aware distributed machine learning workloads
HASHEMIZADEH, HESSAM
2024/2025
Abstract
The growing complexity of machine learning models, especially with the rise of Large Language Models (LLMs) reaching trillions of parameters, has changed the computation and communication requirements of distributed training. Training a state-of-the-art LLM can require over 25,000 GPUs, exceeding the computational capacity of most existing datacenters and forcing training to be distributed across multiple geo-distributed datacenters. Connectivity among these datacenters must be provided by Wide Area Networks (WANs), which significantly increases the complexity of GPU-to-GPU communication and makes it challenging to maintain high-quality connectivity. Open-source simulators play a crucial role in studying and optimizing these complex distributed training systems, as they let researchers evaluate different configurations and strategies without the high cost of running experiments on actual infrastructure. However, current open-source simulators for distributed training (e.g., among the most influential, the ASTRA-sim framework) are mainly designed for intra-datacenter environments with homogeneous, high-bandwidth interconnects. These simulators cannot accurately model the hierarchical nature of inter-datacenter networks, which are characterized by heterogeneous bandwidth availability, higher latency, congestion, and other communication overheads. This is a critical gap in our ability to evaluate, simulate, and optimize inter-datacenter training. In this thesis, we present an extension to the ASTRA-sim framework that enables accurate simulation of distributed machine learning workloads across multiple geo-distributed datacenters.
Our extension introduces three key architectural modifications to incorporate WAN awareness: (1) a hierarchical switch taxonomy with three distinct types: intra-datacenter switches using cut-through forwarding with PFC/DCQCN congestion control, border switches implementing RDMA-to-IP protocol translation, and WAN switches employing store-and-forward policies; (2) a dual-layer topology abstraction that creates virtual links between datacenters by aggregating physical WAN path characteristics (minimum bandwidth and cumulative latency) while maintaining actual hop-by-hop routing via ECMP; and (3) network feedback APIs that give collective communication algorithms real-time visibility into bandwidth and congestion state. Through systematic experimental validation, we evaluate training performance across varying WAN bandwidths, propagation delays, datacenter topologies, chunk sizes, and congestion control mechanisms (PFC-only vs. DCQCN). This enhanced simulation framework makes it possible to model distributed training in inter-datacenter scenarios and to evaluate and compare different policies both inside and outside the datacenter, resulting in more realistic simulations.
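The dual-layer topology abstraction in (2) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the ASTRA-sim API: it only shows the aggregation rule the abstract describes, where a multi-hop physical WAN path collapses into one virtual inter-datacenter link carrying the path's bottleneck (minimum) bandwidth and cumulative latency.

```python
# Hypothetical sketch (not ASTRA-sim code): collapsing a physical WAN
# path into a single virtual inter-datacenter link with the minimum
# bandwidth and the cumulative latency of its hops.
from dataclasses import dataclass


@dataclass
class Hop:
    bandwidth_gbps: float  # capacity of this physical link
    latency_us: float      # propagation delay of this physical link


def virtual_link(path: list[Hop]) -> Hop:
    """Aggregate a multi-hop WAN path into one virtual link:
    bottleneck bandwidth, end-to-end cumulative latency."""
    return Hop(
        bandwidth_gbps=min(h.bandwidth_gbps for h in path),
        latency_us=sum(h.latency_us for h in path),
    )


# Example: a three-hop WAN path between two datacenters.
path = [Hop(400, 1000), Hop(100, 5000), Hop(400, 2000)]
print(virtual_link(path))  # Hop(bandwidth_gbps=100, latency_us=8000)
```

In this model the collective communication scheduler sees one link per datacenter pair, while the underlying simulation still routes each chunk hop-by-hop (via ECMP in the actual extension).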
| File | Description | Size | Format |
|---|---|---|---|
| 2025_10_Hashemizadeh_Thesis.pdf (not accessible) | Thesis | 1.44 MB | Adobe PDF |
| 2025_10_Hashemizadeh_Executive_Summary.pdf (not accessible) | Executive Summary | 613.89 kB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/244012