AI-based safety architectures for industrial human-machine interaction
SANNA, VALENTINA
2025/2026
Abstract
This thesis investigates how artificial intelligence can be integrated into safety architectures to protect operators working in proximity to industrial machinery and collaborative robots, in line with the human-centric vision of Industry 5.0. It argues that traditional safety strategies—based on predefined scenarios and conservative worst-case assumptions—are increasingly challenged by dynamic, reconfigurable workcells where humans and machines share space. The study adopts a pipeline perspective of AI-enabled safety, structured into recognition, tracking, trajectory prediction, and decision making. Methods are reviewed across different levels of intelligence, from deterministic rule-based logic to probabilistic models and deep learning. Prediction is framed as an input to safety reasoning rather than a safety action itself. An end-to-end view is emphasized: recognition latency constrains tracking update rates; tracking continuity supports prediction reliability; and prediction quality determines whether decision making can operate proactively instead of reactively. Reliability risks linked to data limitations, distribution shift, uncertainty, and human–AI interaction are analyzed, highlighting the socio-technical nature of AI safety. Evaluation metrics extend beyond algorithmic accuracy to system-level outcomes, including reaction time, near-miss reduction, false safety stops, availability, and operator trust. AI is positioned as a complementary enabler bounded by deterministic, certifiable safety logic and robust fallback mechanisms.

| File | Description | Size | Format |
|---|---|---|---|
| 2026_03_Sanna.pdf (authorized users only, from 21/02/2027) | Thesis text | 5.62 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/251586