Tiny Machine Learning (TinyML) is the research field that aims to combine the high representational power of machine learning models with the tight hardware constraints imposed by embedded/IoT devices. Tiny hardware offers only a few hundred kilobytes of RAM, and microcontroller unit (MCU) clock rates are on the order of kHz. Such scarce resources make deploying deep learning solutions such as feed-forward neural networks on the device challenging. Nevertheless, remarkable results have been achieved, for example in keyword spotting. The usual pipeline for these solutions is to first train the model on highly performant hardware and then, in a second step, convert it for on-device inference. On-device learning, instead, refers to the ability to train and adapt models directly on embedded and edge hardware. Currently, on-device learning is not supported by any commercial TinyML framework for non-specialized hardware, and the academic research in this field is still scarce and mainly focused on specific problems. This thesis develops a toolbox for on-device neural network training that converts input models into an embedded format with training capabilities. The toolbox targets dense feed-forward neural networks (FFNN) and convolutional neural networks (CNN). The solution has been tested on standard datasets for on-device training, transfer learning, and incremental learning after a concept drift.
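To make the distinction concrete: on-device learning means running the backward pass and weight update on the MCU itself, not just the forward pass. Below is a minimal sketch of one stochastic-gradient-descent step for a single dense layer, in plain Python standing in for the C code that would run on the device; all names are illustrative and do not reflect the thesis toolbox's actual API.

```python
import random

# y = W x + b with squared-error loss L = 0.5 * ||y - t||^2.
# On a real MCU this would be fixed-size C arrays; plain Python
# lists are used here purely for readability.

def forward(W, b, x):
    # Matrix-vector product plus bias, one output per row of W.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

def sgd_step(W, b, x, target, lr=0.1):
    # One on-device training step: forward pass, gradient, in-place update.
    y = forward(W, b, x)
    err = [y_i - t_i for y_i, t_i in zip(y, target)]  # dL/dy
    for i, e_i in enumerate(err):
        b[i] -= lr * e_i                              # dL/db_i = e_i
        for j, x_j in enumerate(x):
            W[i][j] -= lr * e_i * x_j                 # dL/dW_ij = e_i * x_j
    return 0.5 * sum(e * e for e in err)              # loss before the update

random.seed(0)
W = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
b = [0.0, 0.0]
x, t = [1.0, 2.0, -1.0], [0.5, -0.5]

losses = [sgd_step(W, b, x, t) for _ in range(20)]
print(losses[0], "->", losses[-1])  # loss shrinks as the layer adapts
```

Inference-only frameworks ship just `forward`; the point of on-device learning is that `sgd_step` (and its memory for gradients) must also fit within the device's few hundred kilobytes of RAM.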
Tiny Machine Learning (TinyML) is the research field that combines machine learning solutions with the tight constraints of embedded/IoT hardware. Only a few hundred kilobytes of RAM are available on tiny hardware, and processor clock rates are on the order of kHz. With such limited resources, running machine learning solutions, neural networks in particular, is a challenge. Nevertheless, remarkable results have been achieved, for example in detecting keywords in audio signals (keyword spotting). These solutions are designed and trained in the cloud and later converted to run on tiny hardware; however, only inference is supported. On-device training, instead, refers to the ability to adapt a model directly on edge/embedded hardware. On-device training has numerous advantages, including energy savings, reduced latency, and improved privacy. Currently, on-device training on non-specialized hardware is not supported by any TinyML framework, and academic research in this area is likewise scarce and focused on individual, specific problems. This thesis develops a toolbox for on-device neural network training on tiny hardware that can convert models into an embedded format. Convolutional neural networks and feed-forward neural networks are supported. The solution has been tested on standard datasets and in the settings of transfer learning, incremental learning, and concept drift.
TinyML on-device neural network training
Ostrovan, Eugeniu
2021/2022
File | Size | Format
---|---|---
TinyML_on_device_neural_network_training fin.pdf (openly accessible online) | 1.43 MB | Adobe PDF
executive summary final.pdf (openly accessible online) | 518.69 kB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/187690