MLOps for Edge AI: continuous training, monitoring and deployment in decentralized environments
MANONI, LORENZO
2024/2025
Abstract
This thesis presents the design and implementation of a modular MLOps pipeline for deploying machine learning models on edge devices, within the context of a complete \textit{IoT} platform. The work is motivated by the growing need to bring AI closer to the data sources, ensuring real-time inference, reduced latency, and autonomy in decentralized environments, while maintaining the methodological rigor and automation standards of modern cloud-based MLOps frameworks. The proposed solution integrates the classical MLOps cycle---training, packaging, deployment, monitoring, and retraining---into a two-layer architecture composed of a central layer and a field (edge) layer. The central layer manages device orchestration, aggregated monitoring, and retraining with scalable cloud components, while the field layer focuses on lightweight, containerized model execution and local monitoring on heterogeneous devices. ONNX is adopted as a common exchange format to ensure interoperability and portability across models and platforms. The methodological approach involves the development of a Python package that abstracts the MLOps operations, offering customizable templates and interfaces to simplify the integration of new models while minimizing the effort required to adapt to client-specific use cases. Experimental validation demonstrates the pipeline's ability to deploy AI applications on constrained devices while enabling monitoring, centralized aggregation, and retraining loops. The main contributions of this work are:
- the adaptation of MLOps principles to edge computing scenarios;
- the development of a reusable and customizable Python package supporting deployment on a proprietary \textit{IoT} platform;
- the demonstration of seamless interoperability through ONNX-based model handling;
- the proposal of future directions, including online and federated learning, for more adaptive and scalable edge intelligence.
The results highlight the feasibility and relevance of extending industrial IoT platforms with AI capabilities, paving the way for more efficient and autonomous edge-based solutions.
| File | Size | Format |
|---|---|---|
| 2025_10_Manoni_Tesi_01.pdf (not accessible) | 7.11 MB | Adobe PDF |
| 2025_10_Manoni_Executive_Summary_02.pdf (not accessible) | 761.68 kB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/243603