From bounded to unbounded: unifying data processing with Apache Flink
Arcelaschi, Federico
2024/2025
Abstract
This thesis presents the redesign of Inpeco's performance-metrics computation for clinical laboratory automation. Starting from a Python pipeline that processes bounded data on client demand, with limited parallelism and no fault tolerance, the work replaces the existing solution with a modular architecture that supports both continuous and historical processing, simplifies development and operations, and improves scalability and maintainability while preserving client-facing behavior. The solution adopts Apache Flink as the processing engine and Apache Kafka as the message broker, and organizes computation as a set of jobs. The combination of Flink APIs and shared libraries, developed for this work and reused across jobs, addresses core stream-processing concerns: event-time processing, watermarks with idleness detection, partitioning by node, and a small ordering buffer to handle out-of-order events. The same operator graph runs in both streaming and batch mode. The architecture improves correctness, maintainability, and scalability, and enables a gradual migration from the legacy system. The solution was validated in batch and streaming against workload profiles derived from operational data. Latency from event generation to processing was analyzed to tune watermark lag and buffer sizes, improving the handling of late and out-of-order events and overall correctness. The implementation and tests indicate that Flink and Kafka provide a solid foundation for the company's reporting across real-time and historical workloads. To meet Inpeco's need to analyze laboratory performance, possible future improvements include broadening metric coverage, tuning latency and watermark lag as data streams evolve, introducing autoscaling guided by latency and load, strengthening automatic recovery from failures, and implementing late-data reconciliation.
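The stream-processing concerns named in the abstract can be illustrated with a plain-Python sketch (this is not the thesis code, and the class and parameter names are hypothetical): a small ordering buffer that releases events in timestamp order once a watermark, defined as the highest event time seen minus an allowed lag, passes them, plus an idleness hook that flushes the buffer when a source stops emitting.

```python
import heapq


class OrderingBuffer:
    """Illustrative sketch of a bounded ordering buffer with a
    bounded-out-of-orderness watermark (hypothetical API, not Flink's)."""

    def __init__(self, max_lag):
        self.max_lag = max_lag       # allowed out-of-orderness, in event-time units
        self.max_ts = float("-inf")  # highest event time observed so far
        self.heap = []               # min-heap of (event_time, event)

    def watermark(self):
        # Watermark: no event with a smaller timestamp is expected anymore.
        return self.max_ts - self.max_lag

    def push(self, ts, event):
        """Buffer one event; return the events now safe to emit, in order."""
        self.max_ts = max(self.max_ts, ts)
        heapq.heappush(self.heap, (ts, event))
        return self._drain()

    def flush_idle(self):
        """Idleness handling: if the source goes quiet, release everything
        still buffered (in timestamp order) instead of waiting forever."""
        out = []
        while self.heap:
            out.append(heapq.heappop(self.heap)[1])
        return out

    def _drain(self):
        out = []
        while self.heap and self.heap[0][0] <= self.watermark():
            out.append(heapq.heappop(self.heap)[1])
        return out


buf = OrderingBuffer(max_lag=2)
buf.push(1, "a")   # watermark = -1: nothing emitted yet
buf.push(5, "c")   # watermark = 3: "a" is emitted
buf.push(4, "b")   # out of order, held in the buffer
buf.push(8, "d")   # watermark = 6: "b" then "c" emitted, in order
buf.flush_idle()   # source idle: remaining "d" is released
```

Flink provides this behavior natively via bounded-out-of-orderness watermark strategies with idleness detection; the sketch only shows why a lag that matches the observed event-to-processing latency matters: too small and late events are mishandled, too large and results are delayed.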
| File | Size | Format | Access |
|---|---|---|---|
| 2025_10_Arcelaschi.pdf | 704.15 kB | Adobe PDF | authorized users only, from 22/09/2026 |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/243547