Approximate Bayesian Ensembling for physics-informed deep learning architectures
Capello, Federico
2020/2021
Abstract
Numerical approximation of mathematical models and deep learning techniques have been widely used in recent decades to solve real-world problems in several areas of applied science and engineering. Recently, researchers have started to combine techniques from these two areas to tackle the efficient numerical approximation of physics-based models of increasing complexity. Such techniques often suffer from measurement noise, lack of data (e.g., physical coefficients, boundary conditions), or the intrinsic variability of the phenomena. There is therefore a need for techniques that quantify the uncertainties of these models. Approximate Bayesian Ensembling (ABE) is one such algorithm for estimating the uncertainty of model predictions. In this work we investigate how ABE performs on increasingly complex physics-informed architectures, such as Physics-Informed Neural Networks (PINNs) and deep learning-based reduced order models for parameterized PDEs (DL-ROMs), and empirically evaluate the quality of ABE's uncertainty estimates. Furthermore, we exploit ABE to devise an adaptive sampling technique (ABES), testing the resulting method on a simple benchmark.

| File | Size | Format |
|---|---|---|
| Approximate_Bayesian_Ensembling_for_Physics_Informed_Deep_Learning_Architectures_FC.pdf | 12.99 MB | Adobe PDF |

Accessible online only to authorized users.
Description: updated version 18MAY
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/189117