Monitoring the risk of nuclear system by grey box models
Quadri, Fosco Eugenio
2023/2024
Abstract
The use of artificial intelligence (AI) in risk monitoring offers a promising solution for predicting critical parameters. However, safety verification and validation of AI systems remains a crucial challenge for online risk monitoring in nuclear power plants. In nuclear applications in particular, while AI models can predict critical safety parameters with high accuracy, their black box nature poses significant regulatory and safety assurance challenges. This creates a pressing need for methodologies that leverage the predictive capabilities of AI models while meeting the verification and validation requirements of the nuclear industry. Accuracy is fundamental, as prediction errors could lead to catastrophic consequences; the trustability of these models is equally crucial, as it enables domain experts to validate decisions, comply with regulations, and intervene promptly when necessary. This creates a tradeoff, as the two usually present competing objectives. While physics-based white box (WB) models provide transparency through known physical laws, they may lack accuracy in capturing complex system dynamics and be computationally demanding. Conversely, data-driven black box (BB) models can achieve high accuracy but lack the trustability required in safety-critical contexts. Grey box (GB) models emerge as an intermediate solution by combining physical knowledge (WB) with data-driven components (BB), although the trustability of the BB component still requires validation methods. This work proposes a methodology that combines physics-based knowledge with data-driven techniques through an explainable GB model, coupled with SHapley Additive exPlanations (SHAP) for verification of the model's reasoning process.
The methodology enables assessing when the predictions of the GB model can be trusted, based on the relevance of the physical input in the prediction, providing a framework for verifying the use of AI in nuclear safety applications. To demonstrate the effectiveness of the method, we apply it to an artificial case study that resembles possible real applications. The proposed explainable GB model shows superior accuracy compared to both standalone WB and BB models, achieving a 9% improvement in Root Mean Square Error (RMSE) while increasing the proportion of trustworthy predictions from 82% to 96%. The proposed method successfully identifies regions where the BB component appropriately incorporates physical knowledge, providing a basis for validating the predictions. The methodology established through this artificial case study lays the groundwork for future application to a real-world dual fluid reactor system, where it will be used for online monitoring during loss-of-coolant accident (LOCA) scenarios. This work represents an innovative step towards enabling the safe deployment of AI systems in nuclear plant operations, introducing a methodology that aims to address regulatory requirements for system verification and validation.
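The core idea described above — feed the white-box physics prediction to the black-box model as an input, then use Shapley values to check whether that physical input actually drives the prediction — can be sketched in a few lines. Everything below is illustrative, not the thesis's actual models or settings: the cooling-law physics, the fixed black-box function standing in for a trained regressor, the baseline values, and the 0.5 trust threshold are all assumptions.

```python
import itertools
import math

# White-box component: known physics (illustrative Newtonian cooling law).
def white_box(t):
    return 20.0 + 80.0 * math.exp(-0.1 * t)

# Black-box component: stands in for a trained regressor; a fixed function
# of (sensor_reading, wb_prediction) keeps the example self-contained.
def black_box(sensor, wb_pred):
    return 0.3 * sensor + 0.7 * wb_pred

# Exact Shapley values for the black-box inputs, replacing features absent
# from a coalition with a baseline value.
def shapley(f, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for coal in itertools.combinations(others, r):
                w = (math.factorial(len(coal)) * math.factorial(n - len(coal) - 1)
                     / math.factorial(n))
                x_with = [x[j] if (j in coal or j == i) else baseline[j] for j in range(n)]
                x_without = [x[j] if j in coal else baseline[j] for j in range(n)]
                phi[i] += w * (f(*x_with) - f(*x_without))
    return phi

t = 5.0
sensor, wb_pred = 72.0, white_box(t)
phi = shapley(black_box, [sensor, wb_pred], baseline=[60.0, 60.0])

# Trust criterion (hypothetical threshold): flag the prediction as
# trustworthy only if the physics input carries enough of the attribution.
share = abs(phi[1]) / (abs(phi[0]) + abs(phi[1]))
trusted = share >= 0.5
```

In practice the exact coalition enumeration would be replaced by an efficient approximation such as the `shap` library's explainers, but the trust criterion — comparing the attribution assigned to the physical input against a threshold — stays the same.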
| File | Size | Format | Access |
|---|---|---|---|
| Monitoring the Risk of Nuclear System by Grey Box Models.pdf | 1.96 MB | Adobe PDF | openly accessible on the internet |
| 2024_12_Quadri.pdf | 1.78 MB | Adobe PDF | openly accessible on the internet |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/230133