Active flow control is today one of the most active research topics in fluid dynamics, with applications in fields such as aerodynamics, the automotive industry, and environmental engineering. Classical control methods rest on solid theoretical foundations but have practical limitations: they often require linearity assumptions, scale poorly as the number of control variables grows, assume full knowledge of the state, and are sensitive to parametric or modeling perturbations. The adjoint method overcomes some of these shortcomings: it handles nonlinear constraints and scales well with the number of controls. However, it requires the cost functional to be differentiable and can generalize poorly. In recent years, Deep Reinforcement Learning (DRL) has gained attention as a flexible, data-driven approach to active control, capable of learning optimal policies from interactions with the environment, even in the presence of partial observations and noise. This thesis develops a DRL framework for active flow control and evaluates its performance against the adjoint method. Two classical test cases are analyzed: the flow around a 2D cylinder (drag minimization) and the flow over a backward-facing step (reduction of the recirculation bubble). The numerical simulation is based on FEniCS and is coupled with the PPO and TD3 algorithms. The results show that both are able to learn effective, generalizable policies that remain robust even in complex dynamics.
Active flow control of unsteady flows using deep reinforcement learning
Baldini, Filippo
2024/2025
Abstract
Active flow control is one of the most vibrant research topics in fluid dynamics, with applications across aerodynamics, automotive engineering, and environmental sciences. Classical control strategies rely on rigorous mathematical formulations, such as adjoint-based optimization, and are well suited to handling complex constraints. However, they often suffer from practical limitations: they require differentiability assumptions and full-state observability, they are sensitive to model perturbations, and they do not scale well with the dimensionality of the control space. In recent years, Deep Reinforcement Learning (DRL) has emerged as a flexible, data-driven alternative for solving high-dimensional control problems. DRL does not require explicit knowledge of the system dynamics and can learn directly from interactions with the environment, even under partial and noisy observations. This thesis presents a DRL-based framework for active flow control and compares it with classical approaches based on the adjoint method. Two benchmark configurations are considered: the two-dimensional flow around a circular cylinder, where the objective is drag reduction, and the backward-facing step, where the goal is to minimize the size of the recirculation bubble. The fluid simulations are implemented in FEniCS, and control is applied through jets located on the domain boundary. Two DRL algorithms are tested: Proximal Policy Optimization (PPO) and Twin Delayed Deep Deterministic Policy Gradient (TD3). Results show that both algorithms learn effective and generalizable control policies, demonstrating the potential of DRL for PDE-constrained control problems in complex flow environments.
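The differentiability requirement mentioned in the abstract is easiest to see in the standard adjoint formulation of PDE-constrained optimization. The following is a generic sketch with notation chosen here (it is not taken from the thesis): a cost \(J\) is minimized over controls \(u\), with the flow state \(q\) constrained by the discretized flow equations \(R\).

```latex
% Generic adjoint-based gradient for PDE-constrained optimization.
% Notation (J, q, u, R, \lambda) is illustrative, not from the thesis.
\begin{aligned}
&\min_{u} \; J(q(u), u)
\quad \text{subject to} \quad R(q(u), u) = 0, \\[4pt]
% Adjoint equation: one linear solve, independent of the number of controls.
&\left(\frac{\partial R}{\partial q}\right)^{\!\top} \lambda
  = -\left(\frac{\partial J}{\partial q}\right)^{\!\top}, \\[4pt]
% Total gradient assembled from the adjoint variable:
&\frac{\mathrm{d}J}{\mathrm{d}u}
  = \frac{\partial J}{\partial u}
  + \lambda^{\top} \frac{\partial R}{\partial u}.
\end{aligned}
```

One adjoint solve yields the full gradient regardless of how many control variables there are, which is why the method scales well with the control dimension; but every term above must exist, so \(J\) and \(R\) must be differentiable, a requirement DRL does not share.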
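The abstract describes the solver–agent coupling only at a high level. As a rough sketch of how such a FEniCS-in-the-loop setup is commonly structured, the snippet below wraps a flow solver in a Gymnasium environment and hands it to Stable-Baselines3's PPO. Here `FlowSolver`, its `reset`/`advance` methods, the two-jet action space, the probe observations, and the drag/lift reward weights are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a DRL/CFD coupling for the cylinder benchmark.
# FlowSolver is a hypothetical stand-in for the FEniCS-based solver;
# the actual thesis code is in the linked PDF, not reproduced here.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class CylinderFlowEnv(gym.Env):
    """Flow past a 2D cylinder; actions set blowing/suction jet rates."""

    def __init__(self, solver, n_probes=64, max_steps=200):
        super().__init__()
        self.solver = solver            # wraps the FEniCS simulation
        self.max_steps = max_steps
        # One continuous action per boundary jet (two jets assumed here).
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,))
        # Partial observation: velocity/pressure probes in the wake.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_probes,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        obs = self.solver.reset()       # restart from a converged baseline flow
        return obs.astype(np.float32), {}

    def step(self, action):
        # Advance the CFD solver with the jet rates held fixed over one
        # control interval (typically several solver time steps).
        obs, drag, lift = self.solver.advance(action)
        # Reward: reduce drag while penalizing lift fluctuations
        # (the 0.2 weight is an arbitrary choice for this sketch).
        reward = -drag - 0.2 * abs(lift)
        self.steps += 1
        truncated = self.steps >= self.max_steps
        return obs.astype(np.float32), reward, False, truncated, {}


# env = CylinderFlowEnv(FlowSolver(...))   # FlowSolver: hypothetical
# model = PPO("MlpPolicy", env, verbose=1)
# model.learn(total_timesteps=100_000)
```

TD3 would be trained the same way, swapping `PPO` for Stable-Baselines3's `TD3`, since both accept continuous (Box) action spaces through the same interface.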
| File | Description | Size | Format |
|---|---|---|---|
| 2025_07_Baldini_Tesi_01.pdf | Thesis (openly accessible online) | 5.2 MB | Adobe PDF |
| 2025_07_Baldini_ExecutiveSummary_02.pdf | Executive Summary (openly accessible online) | 1.06 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240977