Optimizing neural networks for TinyML: a study on quantization schemes
Moriello, Claudio
2023/2024
Abstract
Tiny Machine Learning (TinyML), a promising and rapidly evolving field, aims to deploy compressed and optimized machine learning (ML) models on small, low-power devices such as battery-powered microcontrollers and embedded systems. In a world now dominated by devices constrained in terms of memory, computation, and energy, TinyML has become almost essential to meet the demands of this evolving landscape. Of the many existing techniques for enabling ML capabilities on these resource-constrained devices, this thesis focuses on quantization. This research investigates the impact of various standardized quantization schemes on the performance and memory footprint of neural networks tailored for image classification tasks. Seven distinct quantization schemes, categorized into mixed-precision and uniform-precision approaches, are analyzed across different neural network architectures, including MobileNetV1, ResNet-50, Xception, VGG-16, and three custom-built convolutional neural networks (CNNs) of increasing depth. The proposed mixed-precision quantization schemes (Top-Down, Bottom-Up, Progressive-Depth, Input-Output) involve partitioning a neural network into sections, each quantized with a different bit width. This approach aims to study how varying the precision in different parts of the network affects performance and the overall memory footprint. The proposed uniform-precision quantization schemes (8-bit, 4-bit, 2-bit), on the other hand, investigate the impact of reducing the bit width on the final results. The experiments use the CIFAR-10 dataset to evaluate the effectiveness of Quantization Aware Training (QAT) through the QKeras library. The results demonstrate that all the proposed schemes, except the impractical uniform 2-bit quantization, generally achieve a significant reduction in memory footprint with minimal performance loss, highlighting the potential of variable quantization techniques.
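The abstract describes mixed-precision schemes realized with Quantization Aware Training through QKeras. Below is a minimal sketch of how such a configuration can be expressed with QKeras, assuming an illustrative two-section split and 8-bit/4-bit widths; the thesis's actual layer partitioning, quantizer settings, and training setup are not specified in this abstract.

```python
# Illustrative sketch (not the thesis code): QAT with QKeras, mixing bit widths
# across two sections of a small CNN on CIFAR-10-shaped inputs.
import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, MaxPooling2D
from tensorflow.keras.models import Model
from qkeras import QConv2D, QDense, QActivation, quantized_bits, quantized_relu

inputs = Input(shape=(32, 32, 3))

# Early section quantized at 8 bits (assumed split for illustration).
x = QConv2D(16, (3, 3), padding="same",
            kernel_quantizer=quantized_bits(8, 0, alpha=1),
            bias_quantizer=quantized_bits(8, 0, alpha=1))(inputs)
x = QActivation(quantized_relu(8))(x)
x = MaxPooling2D()(x)

# Later section quantized at 4 bits.
x = QConv2D(32, (3, 3), padding="same",
            kernel_quantizer=quantized_bits(4, 0, alpha=1),
            bias_quantizer=quantized_bits(4, 0, alpha=1))(x)
x = QActivation(quantized_relu(4))(x)
x = MaxPooling2D()(x)

x = Flatten()(x)
outputs = QDense(10, kernel_quantizer=quantized_bits(4, 0, alpha=1),
                 bias_quantizer=quantized_bits(4, 0, alpha=1))(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

# Quantization-aware training: weights and activations are quantized in the
# forward pass while gradients flow through the quantizers during backprop.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
model.fit(x_train / 255.0, y_train, epochs=1, batch_size=128)
```

Swapping the bit widths passed to `quantized_bits` and `quantized_relu` per section is one way to express schemes such as the Top-Down or Bottom-Up partitions mentioned above.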
| File | Description | Size | Format |
|---|---|---|---|
| 2024_07_Moriello.pdf (openly accessible on the internet) | Thesis text | 3.38 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/222539