Comparative robustness of spiking reservoirs and feedforward networks on temporal XOR
Kumria, Mikel
2024/2025
Abstract
Artificial neural networks (ANNs) have achieved remarkable performance across a wide range of tasks, but their success has come at the expense of increasingly prohibitive energy and economic costs, particularly during training and inference on conventional hardware. Neuromorphic computing proposes a new paradigm in which brain-inspired spiking neurons, event-driven architectures, and specialized training algorithms can reduce power consumption by orders of magnitude. However, learning in spiking neural networks (SNNs) remains challenging due to the non-differentiable nature of spikes and the sensitivity of analog dynamics to noise and device variability. Reservoir computing offers a potential solution by fixing a large, randomly connected recurrent network (the reservoir) and training only a lightweight readout layer, thereby enabling extremely fast and stable learning. In this work, we systematically compare a spiking reservoir architecture to a conventional feedforward SNN on the temporal XOR task under varying levels of analog noise and temporal gap between inputs. Our results demonstrate that the reservoir computing approach not only achieves higher training speed—requiring only a single linear regression step—but also exhibits superior robustness to synaptic and input signal variability. By analyzing performance as a function of the temporal gap and noise amplitude, we highlight how reservoir dynamics provide an intrinsically stable computational substrate for temporal pattern recognition. These findings underline the promise of reservoir computing for scalable, low-power neuromorphic systems and guide future hardware implementations for efficient edge intelligence.
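To make the readout-only training scheme from the abstract concrete, the sketch below pairs a fixed random leaky integrate-and-fire reservoir with a single ridge-regression step on a temporal-XOR input stream. All parameter values, the pulse encoding, and the trace-based feature vector are illustrative assumptions, not the thesis's actual configuration (which is in the restricted PDF); only the structure, frozen recurrent weights plus one linear solve for the readout, follows the approach described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200           # reservoir size (hypothetical)
dt = 1.0          # integration step (ms)
tau = 20.0        # membrane / trace time constant (ms)
v_th = 1.0        # spike threshold (hypothetical)
gap = 10          # temporal gap between the two input pulses (steps)
noise_amp = 0.05  # additive analog-noise amplitude (hypothetical)

# Fixed random weights: never trained, as in reservoir computing.
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W_in = rng.normal(0.0, 3.0, size=N)

def run_reservoir(b1, b2, T=80):
    """Drive the reservoir with two pulses encoding (b1, b2), separated
    by `gap` steps, and return the time-averaged spike trace."""
    u = np.zeros(T)
    u[5:8] = 2 * b1 - 1              # first pulse, +1/-1 encoding
    u[5 + gap:8 + gap] = 2 * b2 - 1  # second pulse after the gap
    v = np.zeros(N)                  # LIF membrane potentials
    trace = np.zeros(N)              # low-pass filtered spike trains
    states = []
    for t in range(T):
        noise = noise_amp * rng.normal(size=N)
        v += (dt / tau) * (-v + W_rec @ trace + W_in * u[t] + noise)
        spikes = (v >= v_th).astype(float)
        v[spikes > 0] = 0.0          # reset-to-zero after a spike
        trace += (dt / tau) * (-trace + spikes)
        states.append(trace.copy())
    return np.mean(states, axis=0)   # feature vector for the readout

# Dataset: the four XOR patterns, repeated so noise makes samples differ.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)] * 50
X = np.array([run_reservoir(b1, b2) for b1, b2 in patterns])
y = np.array([b1 ^ b2 for b1, b2 in patterns], dtype=float)

# The single training step: ridge regression on the readout weights only,
# W_out = (X^T X + lam*I)^(-1) X^T y.
lam = 1e-3
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

preds = (X @ W_out > 0.5).astype(int)
print("training accuracy:", (preds == y.astype(int)).mean())
```

Sweeping `gap` and `noise_amp` in this sketch mirrors the two axes the abstract says the comparison is run over; the feedforward-SNN baseline would instead retrain all weights rather than a single readout.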
| File | Description | Size | Format |
|---|---|---|---|
| 2025_07_Kumria_Tesi_01.pdf | MSc Thesis | 14.09 MB | Adobe PDF |
| 2025_07_Kumria_Executive_Summary_02.pdf | Executive Summary | 3.04 MB | Adobe PDF |

Both files are accessible online only to authorized users.
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240391