Exploring the end-to-end compression to optimize the power-performance tradeoff in NoC-based multicores
PANCOT, FABIO
Academic Year: 2015/2016
Abstract
The continuous evolution of market applications and technological devices has posed new challenges to hardware manufacturers, which must meet ever-increasing low-power and performance requirements. The Network-on-Chip (NoC) has emerged as the de-facto on-chip interconnect to provide scalability and flexibility; however, its impact on performance and energy consumption cannot be overlooked. The literature reports several optimization methodologies to cope with this tradeoff, ranging from router architecture changes to priority-based packet scheduling techniques and optimized NoC topologies. In this scenario, data compression techniques represent a viable solution to reduce the injected traffic, thus lowering the required bandwidth and energy profile while increasing system-wide performance. Previous analyses of different workloads highlighted that the majority of the network traffic is due to data transfers. Starting from these observations, we propose an analysis of the impact that an end-to-end (E2E) compression scheme has on the NoC. This thesis investigates the power-performance tradeoff offered by E2E compression in NoCs, with particular emphasis on the power and performance overheads as well as the achievable compression ratio with respect to the optimal compression ratio obtained from a golden model. We then present a qualitative analysis of the most critical aspects of E2E compression to address in order to improve performance and power consumption, and a quantitative analysis that explores the energy and latency overheads introduced by such a compression scheme. We also discuss why E2E compression cannot improve system performance, and finally analyze the benefits of using multiple compression mechanisms in parallel compared to single-compression-algorithm schemes. Results have been collected using a 16-core architecture running a subset of the SPEC CPU2006 benchmark suite. Results show that E2E compression achieves 26% energy savings with almost the same performance as the baseline NoC. Its parallel version increases traffic compressibility by 5.7% and improves its stability over time by 5% on average, compared to single-compression-algorithm schemes.
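To make the idea of parallel compression concrete, the sketch below is a minimal Python illustration, not the hardware mechanism evaluated in the thesis: it compresses the same payload with several algorithms at once and keeps the smallest result, which is what lets a parallel scheme track the best achievable compression ratio more stably than any single algorithm. The zlib, bz2, and lzma compressors are stand-ins chosen only because they ship with the standard library.

```python
# Minimal sketch (illustrative only): a "parallel" compression scheme runs
# several compressors on the same payload and keeps the smallest output,
# instead of committing to a single algorithm. zlib/bz2/lzma are software
# stand-ins for the hardware-friendly compressors a NoC would actually use.
import bz2
import lzma
import zlib

COMPRESSORS = {
    "zlib": zlib.compress,
    "bz2": bz2.compress,
    "lzma": lzma.compress,
}

def best_compression(payload):
    """Compress the payload with every algorithm and keep the smallest result."""
    results = {name: fn(payload) for name, fn in COMPRESSORS.items()}
    name, data = min(results.items(), key=lambda kv: len(kv[1]))
    ratio = len(payload) / len(data)  # compression ratio: original / compressed
    return name, data, ratio

if __name__ == "__main__":
    # A low-entropy, cache-line-like payload: zero padding plus a repeated word.
    payload = bytes(64) + b"\xde\xad\xbe\xef" * 16
    name, data, ratio = best_compression(payload)
    print(f"best: {name}, {len(payload)} -> {len(data)} bytes (ratio {ratio:.2f}x)")
```

A single-compression-algorithm scheme corresponds to always using one fixed entry of the dictionary above; the parallel scheme compared against it in the abstract instead pays the cost of running all of them and keeping the best.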
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2016_12_Pancot.pdf | Thesis text | 1.61 MB | Adobe PDF | openly accessible online |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/132476