A novel MLIR-based compilation flow for accelerating convolutional neural networks on processing-in-memory architectures
SOMAINI, ANDREA
2024/2025
Abstract
The rapid advancements in deep learning, particularly Convolutional Neural Networks (CNNs), have highlighted the need for efficient hardware solutions capable of managing the significant memory and computation demands of these models. Processing-in-Memory (PIM) architectures based on Resistive Random-Access Memory (ReRAM) technology offer a promising alternative to traditional architectures by integrating computation directly within memory, thus minimizing data movement and enhancing efficiency. However, existing Neural Network (NN) compilers for ReRAM-based PIM systems often lack interoperability with widely used NN frameworks, leading to fragmented solutions and missed optimization opportunities. This thesis presents a novel compilation flow built upon the ONNX-MLIR framework, designed to integrate seamlessly with the broader NN ecosystem while targeting ReRAM-based PIM architectures. By introducing two custom MLIR dialects, one providing an abstract representation of a general ReRAM-based PIM architecture and another tailored to a specific one, the compiler translates high-level ONNX models into optimized, hardware-specific executable code. Key contributions include efficient Matrix-Vector Multiplication (MVM) distribution, scheduling, and aggregation, which reduce data movement between cores and thereby increase throughput and decrease energy consumption for CNN inference on PIM architectures. We evaluate the proposed compiler against the state-of-the-art pimcomp compiler on a cycle-accurate simulator, using the VGG-16 and GoogLeNet CNN models. Results demonstrate significant gains in energy efficiency for both models, particularly for GoogLeNet, a topologically complex CNN, which achieves a $37.78\times$ improvement, underscoring the flexibility and efficacy of the proposed compilation flow for complex modern CNNs. This work advances the field by offering a modular, extensible solution for compiling CNNs to ReRAM-based PIM architectures, laying the foundation for more flexible NN acceleration.
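To make the MVM distribution-and-aggregation idea concrete, the sketch below models how a layer's weight matrix might be partitioned across fixed-size ReRAM crossbar tiles, with each tile producing a partial result that is then aggregated into the layer output. This is a minimal Python/NumPy illustration under assumed 128x128 crossbar dimensions; the function names, tile sizes, and data layout are hypothetical and do not reflect the thesis's actual dialects, scheduling, or runtime.

```python
import numpy as np

# Assumed crossbar dimensions for illustration; real ReRAM tiles are
# architecture-specific and fixed by the target hardware.
XBAR_ROWS, XBAR_COLS = 128, 128

def partition_weights(W, rows=XBAR_ROWS, cols=XBAR_COLS):
    """Split a weight matrix into crossbar-sized tiles.

    Each tile would be programmed once into a ReRAM crossbar; the
    (i, j) offset records which slice of the input and output vectors
    the tile covers.
    """
    tiles = []
    for i in range(0, W.shape[0], rows):
        for j in range(0, W.shape[1], cols):
            tiles.append(((i, j), W[i:i + rows, j:j + cols]))
    return tiles

def pim_mvm(tiles, x, out_dim):
    """Distribute an MVM over the tiles and aggregate partial sums.

    On real hardware each crossbar performs its dot products in
    parallel in the analog domain; this loop only models the per-tile
    partial results that a compiler must schedule and reduce across
    cores.
    """
    y = np.zeros(out_dim)
    for (i, j), tile in tiles:
        y[j:j + tile.shape[1]] += x[i:i + tile.shape[0]] @ tile
    return y

# Toy example: a 300x500 layer mapped onto 128x128 crossbars.
rng = np.random.default_rng(0)
W = rng.standard_normal((300, 500))
x = rng.standard_normal(300)
tiles = partition_weights(W)
assert np.allclose(pim_mvm(tiles, x, 500), x @ W)
```

In this toy layout, tiles sharing the same input slice can be driven concurrently, and the aggregation step is where cross-core data movement arises, which is exactly the cost the thesis's distribution and scheduling passes aim to minimize.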
| File | Description | Size | Format |
|---|---|---|---|
| pim_onnx_mlir.pdf | Complete Thesis | 3.13 MB | Adobe PDF |
| pim_onnx_mlir_summary.pdf | Thesis Executive Summary | 701.06 kB | Adobe PDF |

Both files are openly accessible on the internet from 14/11/2025.
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/230106