Bridging HPC and data-driven methods: a scalable transformer architecture for large-scale PDE simulations
Pagani, Luigi
2024/2025
Abstract
This work addresses the challenge of building widely applicable and efficient neural network models for solving Partial Differential Equations (PDEs), a core problem in modern science and engineering. Existing approaches often struggle to generalize across unstructured meshes, diverse physical parameters, and time-varying conditions, limiting their utility in real-world scenarios. To overcome these barriers, we present a transformer-based framework designed to predict both detailed solution fields and Key Performance Indicators (KPIs) for a broad range of PDE problems, including steady-state, unsteady, and frequency-domain analyses. Our investigation highlights the surprising effectiveness of a simple parameter-appending strategy for incorporating global parameters, such as material properties or boundary controls, directly into the model’s input. Through extensive evaluations, we show that this method enables accurate field and KPI predictions, even in data-scarce regimes. Additionally, we explore data augmentation and subsampling techniques to address memory constraints, demonstrating that these approaches can also enable the training of such architectures on commodity hardware. Overall, the results illustrate how transformer-based neural architectures can reliably capture complex PDE phenomena without resorting to specialized domain assumptions. This work also underscores the feasibility of creating large-scale, pre-trained “foundation models” for PDEs, paving the way for faster and more versatile simulations in High-Performance Computing environments and beyond.
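To illustrate the parameter-appending strategy mentioned in the abstract, the sketch below shows one plausible PyTorch-style implementation: global scalar parameters (e.g. material properties or boundary controls) are concatenated to every node feature before a standard transformer encoder. This is a minimal illustrative assumption, not the thesis code; all class, argument, and dimension names here are hypothetical.

```python
# Minimal sketch (assumed, not from the thesis): global parameters are appended
# to each mesh-node feature vector before a plain transformer encoder.
import torch
import torch.nn as nn

class ParameterAppendingEncoder(nn.Module):
    def __init__(self, field_dim: int, n_params: int,
                 d_model: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # The input projection sees field features plus the appended global parameters.
        self.embed = nn.Linear(field_dim + n_params, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, field_dim)  # per-node field prediction

    def forward(self, node_feats: torch.Tensor, global_params: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, n_nodes, field_dim); global_params: (batch, n_params)
        params = global_params.unsqueeze(1).expand(-1, node_feats.size(1), -1)
        tokens = torch.cat([node_feats, params], dim=-1)  # append parameters to every token
        return self.head(self.encoder(self.embed(tokens)))

# Example: 2000 mesh nodes, 3 input features per node, 2 global parameters.
model = ParameterAppendingEncoder(field_dim=3, n_params=2)
out = model(torch.randn(8, 2000, 3), torch.randn(8, 2))  # -> (8, 2000, 3)
```

The appeal of this formulation is that the same mesh-agnostic token sequence handles varying node counts and parameter settings without any architecture change, which matches the generalization goals stated in the abstract.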
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2025_04_Pagani_Tesi_01_pdf.pdf | Thesis | 20.3 MB | Adobe PDF | not accessible |
| 2025_04_Pagani_ExecutiveSummary_02.pdf | Executive Summary | 6.33 MB | Adobe PDF | not accessible |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/234072