Data-driven approaches for managing resources of cloud, edge, and high-performance computing systems
GUINDANI, BRUNO
2023/2024
Abstract
The prevalence of complex software has surged in modern society, both in terms of strategic reliance and economic relevance. Simultaneously, the hardware required to support such software has grown in size, energy consumption, performance, and complexity. Notable examples of advanced architectures and paradigms for modern software are cloud computing, edge computing, and High-Performance Computing (HPC). A crucial aspect of executing complex software on machines with sophisticated hardware architectures involves selecting the appropriate configuration of software parameters and available hardware resources. Poor choices in these settings can result in noticeable performance degradation or substantial additional expenses. However, the software and hardware complexity poses challenges for fine-tuning these applications or predicting their performance based on input settings -- tasks for which traditional white-box models are inadequate. In this dissertation, we introduce several data-driven, black-box techniques for performance modeling and resource-constrained optimization of cloud computing, edge computing, and HPC systems. Black-box techniques are ideally suited to tackle these complex systems, as they do not require information about the internal workings of the system under study. The foundational elements of the proposed techniques are Bayesian Optimization (BO) and Machine Learning (ML). BO is one of the leading state-of-the-art techniques for black-box optimization of complex Information and Communication Technology (ICT) systems, and ML models can provide the required performance predictions with reliable accuracy. Both can infer the relationship between software/hardware input settings and output metrics of applications without needing any internal information. In particular, this dissertation presents several tools to model application performance via ML, including (i) the open-source aMLLibrary for automatic parallel training and feature/hyperparameter selection, (ii) its integration in the OSCAR-P auto-profiling framework for Function-as-a-Service (FaaS)-based applications running in the edge computing continuum, and (iii) a framework for Graph Neural Network (GNN)-based thermal prediction in HPC centers. We also propose several algorithms integrating BO and ML models to optimize the execution of cloud and HPC applications while respecting resource or time constraints: the MALIBOO sequential algorithm and three parallel optimization extensions for HPC settings. We validate our techniques in extensive experimental campaigns spanning several ICT domains, including but not limited to cloud-based big data applications, edge computing applications, neural networks, and molecular simulation software. Our trained ML models often achieve minimal error metrics, and our BO-based optimization techniques consistently outperform state-of-the-art solutions in finding optimal configurations and reducing the number of executions that violate the constraints.
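To make the core idea concrete, the sketch below illustrates a generic constrained Bayesian Optimization loop over a discrete set of candidate configurations, using Gaussian Process surrogates for both the objective (execution cost) and the constrained metric (execution time). This is only an illustrative example under assumed, synthetic settings: the `run_application` function, the cores/memory search space, and the 60-second time limit are all hypothetical, and the code is not the MALIBOO algorithm or any tool from the dissertation.

```python
"""Minimal sketch of constrained BO over candidate configurations.
Synthetic toy objective; NOT the MALIBOO algorithm from the thesis."""
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Hypothetical black-box application: x = (number of cores, memory in GB).
# Objective: execution cost to minimize; constraint: execution time <= 60 s.
def run_application(x):
    cores, mem = x
    exec_time = 100.0 / cores + 5.0 / mem + rng.normal(0, 0.5)
    cost = 0.1 * cores * exec_time + 0.02 * mem * exec_time
    return cost, exec_time

TIME_LIMIT = 60.0
candidates = np.array([[c, m] for c in range(1, 17) for m in (2, 4, 8, 16)],
                      dtype=float)

# Initial random design.
idx = rng.choice(len(candidates), size=5, replace=False)
X = candidates[idx]
costs, times = zip(*(run_application(x) for x in X))
y_cost, y_time = np.array(costs), np.array(times)

for _ in range(20):
    # Surrogates for the objective and for the constrained metric.
    gp_cost = GaussianProcessRegressor(normalize_y=True).fit(X, y_cost)
    gp_time = GaussianProcessRegressor(normalize_y=True).fit(X, y_time)
    mu_c, sd_c = gp_cost.predict(candidates, return_std=True)
    mu_t, sd_t = gp_time.predict(candidates, return_std=True)

    # Expected Improvement over the best feasible observation so far.
    feasible = y_time <= TIME_LIMIT
    best = y_cost[feasible].min() if feasible.any() else y_cost.min()
    sd_c = np.maximum(sd_c, 1e-9)
    z = (best - mu_c) / sd_c
    ei = (best - mu_c) * norm.cdf(z) + sd_c * norm.pdf(z)

    # Probability that the time constraint is satisfied, used to weight EI.
    prob_feas = norm.cdf((TIME_LIMIT - mu_t) / np.maximum(sd_t, 1e-9))

    x_next = candidates[np.argmax(ei * prob_feas)]
    cost, exec_time = run_application(x_next)
    X = np.vstack([X, x_next])
    y_cost = np.append(y_cost, cost)
    y_time = np.append(y_time, exec_time)

feasible = y_time <= TIME_LIMIT
best_i = np.where(feasible)[0][np.argmin(y_cost[feasible])]
print(f"Best feasible configuration: {X[best_i]}, "
      f"cost {y_cost[best_i]:.2f}, time {y_time[best_i]:.1f} s")
```

Weighting Expected Improvement by the estimated probability of feasibility is a standard way to steer the search away from configurations that are likely to violate the constraint; the dissertation's contributions build on this general BO-plus-ML idea with sequential and parallel variants tailored to cloud and HPC workloads.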
| File | Description | Access | Size | Format |
|---|---|---|---|---|
| Guindani_PhD_Thesis.pdf | Thesis PDF | Openly accessible online | 12.77 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise stated.
https://hdl.handle.net/10589/224613