Mapping and tuning neural networks on hardware accelerators
Indirli, Fabrizio
2025/2026
Abstract
Deep Neural Networks (DNNs) have become a dominant paradigm in modern Artificial Intelligence, advancing fields such as computer vision and natural language processing. As mobile and embedded devices proliferate, inference workloads are increasingly shifting from the cloud to the edge. Yet the high computational and memory demands of DNNs make deployment on such platforms difficult, given their tight power, latency, and memory constraints. This emerging field, known as Edge AI, increasingly relies on specialized hardware such as Neural Processing Units (NPUs) and FPGA-based accelerators, supported by dedicated compiler toolchains. Fully exploiting these accelerators requires mapping schemes and compiler optimizations tailored to both the model and the device, a process that remains complex and time-consuming. This thesis tackles these challenges by exploring accelerator design, neural network mapping, and automated tuning methodologies for efficient deployment on constrained platforms.

The first part of this thesis investigates architectural designs and mapping strategies for DNNs to reduce costly off-chip memory transfers. First, we propose a configurable fused-layer architecture for FPGA-based accelerators, showing that maximizing on-chip data reuse enables substantial bandwidth savings and performance gains. A predictive ML model is introduced to estimate latency and resource utilization, supporting efficient design space exploration. The work then extends to industrial NPUs in collaboration with STMicroelectronics: we analyze ST’s Neural-ART AI accelerator, focusing on its configurable architecture for multi-layer execution with internal data exchange; its compilation toolchain is integrated with TensorFlow Lite Micro to enable hybrid deployment, where compute-intensive layers are offloaded to the NPU while others run on the CPU. Finally, we investigate a prototype NPU with Digital In-Memory Computing (DIMC) SRAM, which reduces data movement by computing within the memory. We propose tailored mapping schemes for binary and mixed-precision networks on DIMC tiles to enhance efficiency in memory-bound workloads, and validate them on real-world use cases.

The second part of the thesis focuses on tuning compiler-level optimizations and mapping strategies, since the efficiency of AI inference critically depends on how models are bound to compute and memory resources. Emphasis is placed on targets and runtimes that support concurrent multi-layer execution. First, a layer-wise optimization methodology is introduced to incrementally tune compilation parameters across convolutional layers, improving latency and memory footprint under resource constraints. This approach is then extended with local metaheuristics to increase search efficiency, combined with a sliding-window mechanism that concentrates co-optimization on layers likely to execute concurrently. Together, these methods enable compilers to identify high-quality mappings for flexible accelerators within practical exploration times.
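To make the layer-wise, sliding-window tuning idea concrete, the sketch below shows one possible shape of such a loop. It is a minimal illustration rather than code from the thesis: the names (`tune_layerwise`, `candidate_configs`, `estimate_cost`) and the simplified scalar cost are assumptions, and the actual methodology tunes the compiler's own parameters against measured or predicted latency and memory footprint on the target accelerator.

```python
from itertools import product

def tune_layerwise(layers, candidate_configs, estimate_cost, window=3):
    """Incrementally pick one compilation config per layer.

    Hypothetical interface: `layers` is an ordered list of layer names,
    `candidate_configs[layer]` lists the configs to try for that layer, and
    `estimate_cost(partial_config)` returns a scalar (e.g. latency plus a
    memory penalty) and is assumed to tolerate layers that are not yet fixed.
    """
    frozen = {}  # layer -> chosen config, fixed as the window slides forward
    for start in range(len(layers)):
        group = layers[start:start + window]  # layers likely to run concurrently
        best_cost, best_combo = float("inf"), None
        # Exhaustive search inside the window; a local metaheuristic
        # (e.g. hill climbing or simulated annealing) could replace it.
        for combo in product(*(candidate_configs[l] for l in group)):
            trial = {**frozen, **dict(zip(group, combo))}
            cost = estimate_cost(trial)
            if cost < best_cost:
                best_cost, best_combo = cost, combo
        # Freeze only the leading layer of the window, then slide by one.
        frozen[group[0]] = best_combo[0]
    return frozen
```

Even in this toy form, the search cost per window grows with the product of the candidate counts of the layers inside it, which is exactly what the window bound and the local metaheuristics are meant to keep within practical exploration times.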
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| PhD_thesis_Indirli_Rev2.pdf | thesis | 7.55 MB | Adobe PDF | not accessible |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/248538