Exploiting hardware and software specialization for efficient regular expressions processing
Carloni, Filippo
2024/2025
Abstract
The last decade has seen remarkable growth in the amount of data to analyze, challenging current processing systems to keep pace with increasingly demanding computational and latency requirements. These data must be correctly manipulated and filtered, depending on the application field, to extract useful information. Within this scenario, pattern matching based on Regular Expressions (REs) is one of the most pervasive and useful approaches to data analysis. Typical RE applications include packet filtering and malware analysis in computer security, genome analysis in bioinformatics, Natural Language Processing in speech recognition and synthesis, and database queries. However, REs are challenging to handle due to the intrinsically data-dependent computational pattern of their equivalent finite-state machine recognizers, which severely limits their optimization and use in computationally demanding or constrained scenarios. Additionally, dynamic scenarios such as computer security or database queries require updating REs at runtime, adding further complexity. RE processing engines in the literature show a gap between flexibility and runtime adaptability on one side and efficient RE execution on the other. On the one hand, software libraries targeting mainstream general-purpose architectures offer extreme flexibility and runtime adaptability. However, they rely on high-end processors to provide attractive performance, and such processors are generally unavailable in scenarios where processing has to happen near the data, such as intrusion detection and prevention systems or constrained devices, pushing research in alternative directions. On the other hand, hardware-centric approaches exploit direct logic mapping or in-memory automata representations to provide high processing performance at the cost of limited flexibility, reduced support for RE features, and long reconfiguration times for the patterns to analyze. For these reasons, other strategies treat REs as a language to represent automata via a sequence of instructions, increasing flexibility thanks to dedicated Instruction Set Architectures (ISAs) while providing efficient execution through optimized Domain-Specific Architectures (DSAs). However, these approaches currently offer limited support for advanced RE features and only simplistic RE-to-instruction translation, leaving plenty of room for improvement. In this context, this dissertation explores how to exploit hardware and software specialization through a tightly integrated approach, balancing software flexibility and optimization with hardware performance for efficient execution of REs. The dissertation starts by introducing a latency-oriented benchmarking methodology for REs, allowing a standard and replicable evaluation of current CPU- and GPU-based RE execution engines and thus providing the baseline for assessing the proposed solutions. After that, it proposes an RE-tailored ISA based on the most widely used RE features. On top of this ISA, it proposes a speculative DSA for efficient RE processing and a full-custom compiler based on the “RE as a Domain-Specific Language” concept to translate REs into optimized binaries, which are eventually executed on the target microarchitecture. These three components constitute an RE-tailored, fully optimized execution platform focused on low-latency and energy-efficient RE processing.
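As a rough, self-contained illustration of the “RE as a sequence of instructions” idea that the proposed ISA builds on, the Python sketch below implements a textbook Thompson/Pike-style regex virtual machine. The opcodes (CHAR, SPLIT, JMP, MATCH) and the hand-compiled program for a(b|c)*d are generic teaching examples, not the dissertation's ISA, DSA, or compiler output; the sketch only shows why execution is data-dependent, since the set of active threads changes with every input character.

```python
# Minimal sketch (not the dissertation's ISA or compiler): a textbook
# Thompson/Pike-style regex virtual machine. An RE is "compiled" into a short
# program of generic opcodes; matching executes that program over the input.

CHAR, SPLIT, JMP, MATCH = "char", "split", "jmp", "match"

# Hand-written program for the RE  a(b|c)*d  (illustrative only).
PROGRAM = [
    (CHAR, "a"),    # 0: consume 'a'
    (SPLIT, 2, 7),  # 1: enter the (b|c)* loop or fall through to 'd'
    (SPLIT, 3, 5),  # 2: choose the 'b' branch or the 'c' branch
    (CHAR, "b"),    # 3
    (JMP, 1),       # 4: back to the loop head
    (CHAR, "c"),    # 5
    (JMP, 1),       # 6
    (CHAR, "d"),    # 7
    (MATCH,),       # 8: whole input matched
]

def add_thread(program, threads, pc):
    """Add a thread (program counter), eagerly following control-flow opcodes."""
    if pc in threads:
        return
    threads.add(pc)
    op = program[pc]
    if op[0] == JMP:
        add_thread(program, threads, op[1])
    elif op[0] == SPLIT:
        add_thread(program, threads, op[1])
        add_thread(program, threads, op[2])

def matches(program, text):
    """Breadth-first execution: all active threads advance on each input character."""
    current = set()
    add_thread(program, current, 0)
    for ch in text:
        nxt = set()
        for pc in current:
            op = program[pc]
            if op[0] == CHAR and op[1] == ch:
                add_thread(program, nxt, pc + 1)  # data-dependent fan-out
        current = nxt
    return any(program[pc][0] == MATCH for pc in current)

print(matches(PROGRAM, "abcbd"))  # True
print(matches(PROGRAM, "abc"))    # False
```

Running matches(PROGRAM, "abcbd") returns True while "abc" is rejected; the input-driven growth and shrinkage of the active-thread set is exactly the irregular computational pattern that a specialized DSA and compiler aim to execute efficiently.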
Then, this dissertation explores the impact of multiple abstraction levels during RE compilation and proposes a multi-dialect compiler based on the MLIR compiler infrastructure. Afterward, since the proposed approach focuses on the latency of single-RE execution, this dissertation proposes a novel methodology to merge multiple REs into a single automaton, providing high throughput and reducing memory utilization while processing multiple REs in parallel. Finally, this dissertation summarizes the proposed contributions, discusses their limitations, and explores potential future directions for the RE domain.
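As a loose, self-contained analogy for merging multiple REs into a single automaton, the Python sketch below joins hand-written NFAs for two toy patterns (ab* and (a|b)c) under one start state and reports, in a single pass over the input, which patterns accept it. The NFA encoding, state names, and helper functions are hypothetical illustrations only; the dissertation's methodology further addresses throughput and memory utilization, which this sketch does not.

```python
# Minimal sketch (the NFAs, state encoding, and helpers are hypothetical
# examples, not the dissertation's merging methodology): several REs, given
# here as hand-written NFAs, are joined under one start state so that a single
# pass over the input reports which patterns accept it.

EPS = None  # label for epsilon (empty-string) transitions

# Pattern 0:  ab*      Pattern 1:  (a|b)c
NFAS = [
    {"trans": {(0, "a"): {1}, (1, "b"): {1}}, "start": 0, "accept": {1}},
    {"trans": {(0, "a"): {1}, (0, "b"): {1}, (1, "c"): {2}}, "start": 0, "accept": {2}},
]

def merge(nfas):
    """Build one automaton: a fresh start state with epsilon edges into each NFA;
    every state is tagged with its pattern index so the NFAs stay disjoint."""
    start = "merged-start"
    trans = {(start, EPS): {(i, n["start"]) for i, n in enumerate(nfas)}}
    for i, n in enumerate(nfas):
        for (src, sym), dsts in n["trans"].items():
            trans[((i, src), sym)] = {(i, d) for d in dsts}
    accept = {(i, s) for i, n in enumerate(nfas) for s in n["accept"]}
    return trans, start, accept

def closure(trans, states):
    """Epsilon-closure of a set of states."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for d in trans.get((s, EPS), ()):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

def match_all(nfas, text):
    """One left-to-right pass; returns the indices of the REs that accept text."""
    trans, start, accept = merge(nfas)
    current = closure(trans, {start})
    for ch in text:
        current = closure(trans, {d for s in current for d in trans.get((s, ch), ())})
    return sorted({state[0] for state in current & accept})

print(match_all(NFAS, "abbb"))  # [0]
print(match_all(NFAS, "bc"))    # [1]
```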
File | Size | Format
---|---|---
2024_02_carloni.pdf (available online to everyone from 15/02/2026) | 3.54 MB | Adobe PDF
https://hdl.handle.net/10589/233232