FELES: a federated learning simulator
CASAROLI, ARIANNA
2020/2021
Abstract
The increase in computational processing capabilities and the affordability of data storage are widening the range of possible applications of Machine Learning. Machine Learning (ML) is a field of research focused on learning from data, generally collected from external devices, in order to build a model able to predict the correct output for a given input. However, despite ever-increasing computing power and storage capacity, modern technologies still struggle to keep pace with the amount of collected data, leading to network slowdowns when such information must be transferred to central servers for processing. Furthermore, the collected data may contain sensitive information, putting user privacy at risk when such a transfer is performed. To address these issues, Federated ML is gaining momentum. Federated Learning (FL) is an ML technique whose purpose is to train an algorithm across multiple decentralized devices holding local data samples, without exchanging those samples. In FL, the devices participating in the training retrieve the model from the server, train it locally on their local data, and then communicate the parameters of the trained model back to the server, where these local updates are aggregated. The process is repeated a number of times until a satisfactory accuracy is reached. However, FL also brings some drawbacks. For example, communicating millions of model parameters introduces communication overhead. Another difference with respect to classic ML is the heterogeneity of the data, of the devices' capabilities, and of the network conditions. The state of the art offers a large number of algorithms that try to tackle these problems. Unfortunately, the popular FL frameworks and simulators make it difficult to try out new algorithms and to run experiments in an environment with typical federated characteristics. This thesis has two main goals.
The first is the proposal of an FL simulator serving as a starting point for researchers and developers working in the field of federated learning. The proposed simulator does not require a real physical federated network, because both devices and servers are simulated. Easy customization is also one of its main objectives: the device and network characteristics are chosen by the user, together with the techniques to be used during the different phases of the federated learning process. Once a simulation has finished, graphs and statistics of the results can be produced. The second goal of the thesis is the proposal of a number of optimization techniques aiming to improve the simulation. In particular, we focused on client selection techniques that try to reduce training time and resource consumption while also improving accuracy growth.
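The training loop described in the abstract (devices train locally, the server aggregates the local updates, the round repeats) can be sketched as a minimal federated averaging round. This is an illustrative toy, not the simulator's actual API: the scalar linear model, the function names, and all hyperparameters here are assumptions.

```python
import random

def local_update(w, data, lr=0.05, epochs=5):
    """Plain gradient descent on mean squared error, run on-device.

    Only the resulting weight leaves the device, never the data.
    """
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """Server step: aggregate local models, weighted by dataset size."""
    total = sum(len(d) for d in devices)
    local_ws = [local_update(global_w, d) for d in devices]
    return sum(len(d) / total * w for d, w in zip(devices, local_ws))

random.seed(42)
true_w = 3.0

def make_device(n=30):
    """A simulated device holding private samples of y = true_w * x."""
    pts = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        pts.append((x, true_w * x))
    return pts

devices = [make_device() for _ in range(3)]

w = 0.0
for _ in range(50):  # repeat rounds until accuracy is satisfactory
    w = federated_round(w, devices)
```

Weighting each local model by its device's sample count mirrors the aggregation rule commonly used in federated averaging; after the rounds above, `w` approaches the underlying weight without any raw sample ever leaving a device.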
File | Description | Size | Format | Access
---|---|---|---|---
2022_04_Casaroli.pdf | Summary and Master's thesis | 7.12 MB | Adobe PDF | accessible online to everyone
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/185780