On Onsager-consistent regularization in the lattice Boltzmann method for accurate simulation of turbulent vortex flows
Galbiati, Davide
2024/2025
Abstract
This thesis presents a fully parallelized, three-dimensional implementation of the lattice Boltzmann equation (LBE) method enhanced by the Onsager Regularization (OReg) scheme, applied to simulations of colliding vortex rings at Reynolds numbers up to Re = 6000. While previous studies have applied the OReg scheme only in one-dimensional settings, this work is the first to extend it to three-dimensional configurations, demonstrating its potential for stable and physically consistent turbulent flow simulations. The main goal of this thesis is to demonstrate the ability of the OReg scheme to overcome the well-known high-Reynolds-number stability limitations of the classical Bhatnagar–Gross–Krook (BGK) scheme, which typically prevent LBE methods from being applied in turbulent regimes.

Furthermore, the parallel scalability of the LBE scheme and its suitability for high-performance computing (HPC) are assessed through strong and weak scaling studies conducted on a C++ solver developed entirely from scratch, built on the PETSc library and using the DMDA abstraction to manage domain decomposition and data exchange. Since the method is inherently explicit and collocational, almost all operations are performed locally at each grid node; the only step requiring inter-process communication is the streaming phase. This structure leads to excellent parallel scalability and makes the LBE particularly well suited to high-performance computing. Numerical results confirm that the OReg scheme achieves close agreement with theoretical predictions and reference DNS data from Navier–Stokes solvers. Moreover, the algorithm remains fully explicit and avoids the computational burden of solving linear systems, resulting in high computational efficiency while preserving numerical stability.

To assess the parallel performance of the implementation, scalability tests were conducted on multiple architectures, including the Galileo100 cluster, whose nodes are equipped with dual-socket Intel(R) Xeon(R) Platinum 8260 CPUs @ 2.40 GHz, providing 48 physical cores (96 threads), and a dedicated node hosted at Politecnico di Milano, equipped with dual-socket Intel(R) Xeon(R) Gold 6238R CPUs @ 2.20 GHz, totaling 112 threads across 56 physical cores. PETSc was compiled with 64-bit indices to bypass the memory limitations of 32-bit indexing and allow large-scale problems. In particular, the simulation grid used to saturate the available memory was sized 700 × 700 × 700, i.e. 343 million lattice nodes with 27 populations each, resulting in 9.261 billion degrees of freedom (DoFs). Scaling tests were performed with up to 56 MPI processes and showed excellent performance, with the fitted Amdahl model indicating a serial fraction s ≈ 0.00596, corresponding to less than 0.6% of the workload. These results confirm both the parallel efficiency of the collocational LBE scheme and the effectiveness of the PETSc DMDA infrastructure in distributing memory and computation.
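The solver itself is not reproduced in the abstract, so the following is only a minimal sketch of how the population field described above might be laid out with PETSc's DMDA. The D3Q27 lattice (27 DoFs per node, consistent with 700³ nodes giving 9.261 billion DoFs), the periodic boundaries, and the stencil choice are assumptions made for illustration; `--with-64-bit-indices` is the standard PETSc configure option for 64-bit indexing.

```cpp
// Minimal sketch (not the thesis solver): distributing a 27-population field
// over a 700^3 grid with PETSc's DMDA.  Assumes PETSc was configured with
// --with-64-bit-indices so that ~9.3e9 global DoFs fit in PetscInt.
#include <petscdmda.h>

int main(int argc, char **argv)
{
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  const PetscInt N = 700;  // lattice nodes per direction (largest case quoted above)
  const PetscInt Q = 27;   // populations per node, assuming a D3Q27 lattice

  DM da;
  // Periodic box; a BOX stencil of width 1 exposes all 26 neighbours in the
  // ghost layer, which the diagonal D3Q27 streaming directions require.
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD,
                         DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC,
                         DMDA_STENCIL_BOX,
                         N, N, N,                                   // global grid
                         PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,  // ranks per direction
                         Q,                                         // DoFs per node
                         1,                                         // ghost width
                         NULL, NULL, NULL, &da));
  PetscCall(DMSetUp(da));

  Vec fGlobal, fLocal;
  PetscCall(DMCreateGlobalVector(da, &fGlobal));  // owned nodes only
  PetscCall(DMCreateLocalVector(da, &fLocal));    // owned nodes + ghost layer

  // ... time loop: collide locally, exchange ghosts, stream (see next sketch) ...

  PetscCall(VecDestroy(&fGlobal));
  PetscCall(VecDestroy(&fLocal));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}
```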
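Building on that layout, the step structure highlighted in the abstract (purely local collision, a single ghost update, then streaming) could look like the sketch below. The collision shown is the plain BGK relaxation as a stand-in: the thesis replaces it with the OReg operator, whose form is not given in the abstract, and `equilibrium()` and the value of `tau` are hypothetical placeholders.

```cpp
#include <petscdmda.h>

// Hypothetical helper: computes the equilibrium population f_q^eq from the
// node's 27 populations (lattice weights and velocities omitted here).
extern PetscScalar equilibrium(const PetscScalar *fNode, PetscInt q);

// One LBE time step on the DMDA layout of the previous sketch (illustrative).
// Collision touches only owned nodes; the single ghost update before streaming
// is the only inter-process communication of the step.
PetscErrorCode LBEStep(DM da, Vec fGlobal, Vec fLocal, PetscInt Q, PetscReal tau)
{
  PetscScalar ****f;  // f[k][j][i][q], Q values per lattice node
  PetscInt    xs, ys, zs, xm, ym, zm;

  PetscFunctionBeginUser;
  // 1) Collision: node-local relaxation, no communication.
  PetscCall(DMDAVecGetArrayDOF(da, fGlobal, &f));
  PetscCall(DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm));
  for (PetscInt k = zs; k < zs + zm; ++k)
    for (PetscInt j = ys; j < ys + ym; ++j)
      for (PetscInt i = xs; i < xs + xm; ++i)
        for (PetscInt q = 0; q < Q; ++q) {
          // BGK placeholder: f_q <- f_q - (f_q - f_q^eq) / tau.
          const PetscScalar feq = equilibrium(f[k][j][i], q);
          f[k][j][i][q] -= (f[k][j][i][q] - feq) / tau;
        }
  PetscCall(DMDAVecRestoreArrayDOF(da, fGlobal, &f));

  // 2) Ghost update: copy neighbours' post-collision populations into the
  //    local ghost layer -- the only communication of the time step.
  PetscCall(DMGlobalToLocalBegin(da, fGlobal, INSERT_VALUES, fLocal));
  PetscCall(DMGlobalToLocalEnd(da, fGlobal, INSERT_VALUES, fLocal));

  // 3) Streaming (pull form): each owned node reads population q from its
  //    upwind neighbour in the ghosted array fLocal and writes it to fGlobal.
  //    Omitted here for brevity.
  PetscFunctionReturn(PETSC_SUCCESS);
}
```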
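For context, the serial fraction quoted above comes from fitting the standard Amdahl speedup model to the measured speedups; the worked evaluation below is simply what that model implies at the largest process count, not an additional measurement:

$$
S(p) = \frac{1}{\,s + \dfrac{1-s}{p}\,},
\qquad
S(56)\Big|_{s \approx 0.00596} = \frac{1}{0.00596 + \dfrac{0.99404}{56}} \approx 42.2 .
$$

Under this fit the model predicts a speedup of roughly 42 on 56 processes, i.e. a parallel efficiency of about 75%, and as s → 0 the curve approaches the ideal S(p) = p.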
| File | Description | Access | Size | Format |
|---|---|---|---|---|
| 2025_07_Galbiati_Tesi_01.pdf | Thesis LBE - Onsager Regularized | Authorized users only, from 02/07/2026 | 66.3 MB | Adobe PDF |
| 2025_07_Galbiati_Executive_Summary_02.pdf | Executive Summary LBE - Onsager Regularized | Authorized users only, from 02/07/2026 | 14.82 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240560