Resource allocation and scheduling problems in computing continua for artificial intelligence applications
FILIPPINI, FEDERICA
2023/2024
Abstract
Nowadays, Deep Learning (DL) and Artificial Intelligence (AI) are ubiquitous. They are effectively applied to a wide range of products and solutions in largely heterogeneous fields, spanning from healthcare to predictive maintenance and other industrial applications. At the same time, recent years have witnessed an accelerated migration towards distributed computing: Internet of Things devices such as smartwatches, smart city power grids, connected vehicles, and smart homes generate huge volumes of data, whose processing requires mobility support, geo-distribution and, often, effective strategies to reduce operational latency. Privacy constraints, moreover, limit the possibility of analysing data in the public Cloud, favouring the choice of on-premises resources or Edge devices with partial connectivity. Large Cloud datacenters, featuring powerful Virtual Machines often accelerated with GPUs, are exploited to process resource-demanding tasks, which benefit from access to ideally unlimited computational and storage resources under pay-as-you-go pricing models. The effective management of such a complex environment, comprising heterogeneous Edge-to-Cloud resources interconnected in a so-called Computing Continuum, poses great challenges. The execution of AI inference workflows composed of diverse components needs to be optimised at design time, selecting the most appropriate resources from the available computational layers, and then adapted at runtime, since fluctuating workload and environment conditions may lead to sub-optimal scenarios where resources are saturated or underutilised. Similarly, DL training applications executed in private or public Cloud clusters must be effectively scheduled on suitable GPU-accelerated nodes to minimise the expected costs and meet user-imposed due dates. This dissertation addresses these three problems from different perspectives, proposing mathematical models and heuristic or Reinforcement Learning-based methods for the efficient and effective management of AI inference applications, both at design time and at runtime, and of DL training jobs. Furthermore, it discusses techniques to develop analytical and Machine Learning-based performance models, which are crucial to accurately predict application response times on heterogeneous resources. The proposed tools are validated through experimental campaigns in both simulated and prototype environments, demonstrating their effectiveness and applicability to practical scenarios.
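As a purely illustrative aid, not taken from the dissertation, the sketch below gives the flavour of the deadline- and cost-aware scheduling problem mentioned above: each training job is greedily assigned to the cheapest GPU node type that still lets it finish within its due date. All node types, prices, speed-ups, and job figures are hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical node catalogue: hourly price and speed-up w.r.t. a baseline GPU.
# Figures are illustrative only, not drawn from the dissertation.
NODE_TYPES = {
    "gpu-small":  {"price_per_h": 0.9, "speedup": 1.0},
    "gpu-medium": {"price_per_h": 2.1, "speedup": 2.4},
    "gpu-large":  {"price_per_h": 4.5, "speedup": 5.0},
}

@dataclass
class TrainingJob:
    name: str
    baseline_hours: float  # estimated run time on the baseline GPU
    due_date_h: float      # hours from now by which the job must complete

def cheapest_feasible_node(job):
    """Return the (node type, cost) pair of the cheapest node meeting the due date."""
    best = None
    for node, spec in NODE_TYPES.items():
        runtime = job.baseline_hours / spec["speedup"]
        if runtime <= job.due_date_h:  # deadline constraint
            cost = runtime * spec["price_per_h"]
            if best is None or cost < best[1]:
                best = (node, cost)
    return best  # None if no node type can meet the due date

# Greedy pass: place the most urgent jobs first.
jobs = [
    TrainingJob("resnet-finetune", baseline_hours=10.0, due_date_h=6.0),
    TrainingJob("bert-pretrain", baseline_hours=40.0, due_date_h=12.0),
]
for job in sorted(jobs, key=lambda j: j.due_date_h):
    choice = cheapest_feasible_node(job)
    print(job.name, "->", choice if choice else "no feasible node type")
```

The dissertation itself relies on mathematical models and heuristic or Reinforcement Learning-based methods; this toy greedy rule only illustrates how the cost objective and due-date constraint interact.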
| File | Description | Access | Size | Format |
|---|---|---|---|---|
| 2024_02_Filippini.pdf | PhD Dissertation | Openly accessible on the internet | 44.55 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/216813