A federated learning approach to task offloading in MEC environments
BARZAN, DAVIDE
2024/2025
Abstract
The exponential growth of connected devices has strained centralized cloud infrastructures, motivating Multi-access Edge Computing (MEC) as a distributed paradigm that deploys computational resources closer to users. A fundamental challenge in MEC is task offloading: deciding whether to execute tasks locally on resource-constrained devices or migrate them to edge servers or cloud infrastructure. Traditional approaches include heuristic methods that provide fast but rigid decisions, game-theoretic formulations requiring complete system information, and machine learning methods that rely on centralized training, contradicting MEC’s decentralized architecture and raising privacy concerns. This thesis investigates Federated Learning (FL) as a privacy-preserving decision-making algorithm for task offloading in MEC environments. FL enables distributed model training by exchanging only model parameters rather than raw data, addressing privacy constraints while leveraging collective learning across heterogeneous edge nodes. We develop an event-driven simulator integrating FL training with task processing and systematically evaluate FL-based policies against heuristic baselines across network scales, workload heterogeneity, load conditions, and network quality levels. Results demonstrate that FL discovers near-optimal placement strategies through distributed experience, with advantages scaling progressively with network size and complexity. FL excels in large heterogeneous networks, mixed workloads, and under degraded network conditions where learned network-agnostic placement strategies outperform heuristics designed for stable connectivity. The approach achieves higher task success rates and lower latency while maintaining task-aware load distribution. 
However, FL incurs energy overhead from periodic training and requires convergence time, trade-offs that are acceptable in scenarios favoring robustness and adaptability over immediate optimality.

| File | Description | Size | Format |
|---|---|---|---|
| 2025_12_Barzan_Executive_Summary.pdf | Executive Summary | 356.2 kB | Adobe PDF |
| 2025_12_Barzan_Tesi.pdf | Thesis | 6.83 MB | Adobe PDF |
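The abstract describes FL as exchanging only model parameters rather than raw data across edge nodes. The thesis's exact aggregation scheme is not stated in this abstract; the following is a minimal, illustrative sketch of the standard FedAvg-style weighted parameter averaging that such schemes are typically based on. All names (`fed_avg`, `client_params`, `client_weights`) are hypothetical, not taken from the thesis.

```python
def fed_avg(client_params, client_weights):
    """Weighted average of per-client parameter vectors (FedAvg-style).

    client_params: list of equal-length lists of floats, one per edge node.
    client_weights: list of floats, e.g. the number of local training
        samples held by each node. Only these parameters and weights are
        exchanged; the raw task data never leaves the node.
    """
    total = sum(client_weights)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, w in zip(client_params, client_weights):
        for i, p in enumerate(params):
            global_params[i] += (w / total) * p
    return global_params


# Example: three edge nodes with differing local data volumes; the third
# node holds half the samples, so it pulls the global model toward itself.
nodes = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
samples = [10, 10, 20]
print(fed_avg(nodes, samples))  # → [3.5, 4.5]
```

In a round-based deployment the coordinator would broadcast `global_params` back to the nodes for further local training, repeating until convergence — the convergence-time trade-off the abstract notes.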
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/246742