Physics-informed online learning of gray-box models by Moving Horizon Estimation and efficient numerical methods
Løwenstein, Kristoffer Fink
2024/2025
Abstract
Advanced human-engineered systems will inevitably play a more dominant role in the future and are expected to operate with increased autonomy, benefiting society as a whole. Major challenges lie ahead in ensuring that such complex systems operate safely while satisfying strict performance requirements. A compelling idea is to use optimization-based methodologies such as Moving Horizon Estimation (MHE) and Model Predictive Control (MPC) for autonomous decision-making, as they allow complex objectives and critical measures to be incorporated through a cost function and constraints on system states and inputs. A fundamental requirement for MHE and MPC is a reliable, yet computationally light, model enabling online evaluation. Acquiring such a model and, especially, adapting it to temporal variations of the underlying system dynamics, the surrounding environment, or the part-to-part variations found in any system is a challenging endeavor.

This issue is addressed by introducing parametric gray-box models: relying on physics-based modeling facilitates the integration of function approximators of rather limited size, such as Feed-Forward Neural Networks (FFNN) or Recurrent Neural Networks (RNN), as the data-driven component of the gray-box models, and also provides increased model interpretability compared to pure black-box models. By deploying these gray-box models in an MHE scheme, a novel MHE framework for physics-informed learning is introduced. The learning is made safe through constraints consistent with physical laws that are seamlessly integrated into the optimization problem. The proposed MHE scheme is inherently suitable for online application and provides concurrent state estimation and model adaptation with uncertainty quantification. Thus, the proposed MHE framework can support any advanced decision-making process that relies on accurate state and parameter estimates, be it autonomous or human-assisted, extending beyond MPC alone. Furthermore, an offline training algorithm tailored to gray-box models and capitalizing on the proposed MHE scheme is presented, providing a viable alternative to classical training algorithms for this class of models. The utility of the methodology is demonstrated through two numerical experiments, showing promising results for estimation, prediction, closed-loop Nonlinear MPC (NMPC), and online model adaptation.

For real-time implementation of MHE and MPC, the underlying optimization algorithms are of paramount importance. In linear MHE and MPC, the problem boils down to a Quadratic Program (QP), while Sequential Quadratic Programming (SQP), a well-established solution procedure for Nonlinear MHE (NMHE) and NMPC, is based on solving a sequence of related QPs. To meet this demand, QPALM-OCP is introduced: a Proximal Augmented Lagrangian Method (P-ALM) tailored to the multi-stage structure arising in MHE and MPC. The algorithm relies on a semi-smooth Newton method to solve the inner ALM subproblem, allowing multiple active-set changes between iterates, and benefits from warm-starting. Thanks to specialized low-rank matrix modifications, the iterates remain cheap, yielding fast execution times. In contrast to conventional Quadratic Programming (QP) solvers, QPALM-OCP comes with a guarantee of R-linear convergence to a stationary point of nonconvex QPs, making it an ideal candidate for SQP methods, especially when exact second-order information is used. A prototype implementation, despite being non-optimized, outperforms state-of-the-art general-purpose QP solvers and is slightly faster than state-of-the-art Optimal Control Problem (OCP)-specific solvers in numerical experiments, demonstrating great potential for further development.
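To make the gray-box modeling idea concrete, the sketch below (not taken from the thesis) shows in Python/NumPy how a physics-based model can be augmented with a small feed-forward neural network acting as the data-driven residual component. The pendulum dynamics, network architecture, and all parameter values are illustrative assumptions.

```python
import numpy as np

def ffnn_residual(x, u, theta, n_hidden=6):
    """Small FFNN g_theta(x, u): one hidden tanh layer, scalar output.
    theta packs W1, b1, W2, b2 for a fixed (illustrative) architecture."""
    z = np.concatenate([x, u])                    # network input: states and input
    n_in = z.size
    W1 = theta[: n_hidden * n_in].reshape(n_hidden, n_in)
    b1 = theta[n_hidden * n_in : n_hidden * n_in + n_hidden]
    W2 = theta[n_hidden * n_in + n_hidden : n_hidden * n_in + 2 * n_hidden]
    b2 = theta[-1]
    return W2 @ np.tanh(W1 @ z + b1) + b2         # scalar residual acceleration

def graybox_dynamics(x, u, theta, g=9.81, l=0.5, m=0.3, b=0.05):
    """Gray-box pendulum model: physics-based ODE plus learned residual term."""
    phi, omega = x
    # Physics-based part (known structure, nominal parameters)
    domega_phys = (-g / l) * np.sin(phi) - (b / (m * l**2)) * omega + u[0] / (m * l**2)
    # Data-driven correction of the angular acceleration only
    domega_nn = ffnn_residual(x, u, theta)
    return np.array([omega, domega_phys + domega_nn])

def simulate(x0, u_seq, theta, dt=0.01):
    """Explicit Euler rollout of the gray-box model (for illustration only)."""
    x, traj = np.array(x0, dtype=float), [np.array(x0, dtype=float)]
    for u in u_seq:
        x = x + dt * graybox_dynamics(x, np.atleast_1d(u), theta)
        traj.append(x.copy())
    return np.array(traj)

# Example rollout with randomly initialized (untrained) network weights
n_theta = 6 * 3 + 6 + 6 + 1          # W1 (6x3), b1 (6), W2 (6), b2 (1)
theta0 = 0.01 * np.random.randn(n_theta)
trajectory = simulate(x0=[0.2, 0.0], u_seq=np.zeros(200), theta=theta0)
```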
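The following is a minimal sketch of the kind of optimization problem a joint state and parameter MHE scheme solves, written with the CasADi/IPOPT toolchain. The horizon length, cost weights, toy dynamics, and the parameter bounds used to encode physical knowledge are illustrative assumptions and do not reproduce the thesis formulation.

```python
import casadi as ca
import numpy as np

N, nx, ntheta, dt = 20, 2, 3, 0.05          # horizon, state dim, parameter count, step

def f(x, u, theta):
    """Illustrative discretized gray-box dynamics: physics term + parametric correction."""
    phi, omega = x[0], x[1]
    domega = -9.81 / 0.5 * ca.sin(phi) + theta[0] * omega + theta[1] * u + theta[2]
    return ca.vertcat(phi + dt * omega, omega + dt * domega)

# Decision variables: state trajectory over the window and (constant) model parameters
X = ca.MX.sym('X', nx, N + 1)
theta = ca.MX.sym('theta', ntheta)

# Illustrative measured data over the window (angle measurements and applied inputs)
y_meas, u_meas = np.zeros(N + 1), np.zeros(N)

x_prior, theta_prior = np.array([0.1, 0.0]), np.zeros(ntheta)
cost = 10 * ca.sumsqr(X[:, 0] - x_prior)            # arrival/prior cost on the state
cost += 1e2 * ca.sumsqr(theta - theta_prior)        # prior (regularization) on parameters
g = []
for k in range(N):
    cost += 1e3 * (X[0, k] - y_meas[k]) ** 2        # output-fit term
    g.append(X[:, k + 1] - f(X[:, k], u_meas[k], theta))   # multiple-shooting dynamics
cost += 1e3 * (X[0, N] - y_meas[N]) ** 2

n_w = nx * (N + 1) + ntheta
w = ca.vertcat(ca.reshape(X, nx * (N + 1), 1), theta)
solver = ca.nlpsol('mhe', 'ipopt', {'x': w, 'f': cost, 'g': ca.vertcat(*g)})

# Bounds: dynamics imposed as equality constraints; parameter bounds encode prior
# physical knowledge (e.g. sign and magnitude limits on a damping-like coefficient)
lbw, ubw = -np.inf * np.ones(n_w), np.inf * np.ones(n_w)
lbw[-ntheta:] = [-1.0, -1.0, -0.5]
ubw[-ntheta:] = [ 0.0,  1.0,  0.5]
sol = solver(x0=np.zeros(n_w), lbx=lbw, ubx=ubw, lbg=0, ubg=0)
x_hat = ca.reshape(sol['x'][: nx * (N + 1)], nx, N + 1)   # estimated state trajectory
theta_hat = sol['x'][nx * (N + 1):]                       # adapted model parameters
```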
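As a rough illustration of the proximal augmented Lagrangian idea behind QPALM-OCP, the sketch below implements a generic P-ALM loop for a small convex QP with bounds on Ax, using plain semismooth-Newton-type inner steps. It is a toy version only: the actual solver additionally exploits the multi-stage OCP structure, uses an exact line search, relies on low-rank factorization updates, and handles nonconvex QPs.

```python
import numpy as np

def prox_alm_qp(Q, q, A, l, u, sigma=10.0, gamma=1e2,
                tol=1e-8, max_outer=100, max_inner=50):
    """Illustrative proximal ALM for  min 0.5 x'Qx + q'x  s.t.  l <= Ax <= u."""
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)

    for _ in range(max_outer):
        x_bar = x.copy()                      # proximal center for this outer iteration
        for _ in range(max_inner):
            v = A @ x + y / sigma
            z = np.clip(v, l, u)              # projection onto the constraint set [l, u]
            y_hat = y + sigma * (A @ x - z)   # candidate multiplier
            grad = Q @ x + q + (x - x_bar) / gamma + A.T @ y_hat
            if np.linalg.norm(grad, np.inf) <= tol:
                break
            active = (v < l) | (v > u)        # rows whose penalty term is active
            H = Q + np.eye(n) / gamma + sigma * A[active].T @ A[active]
            # Full semismooth Newton step (QPALM-type methods add an exact line search)
            x = x + np.linalg.solve(H, -grad)

        # Outer (multiplier) update and termination check
        z = np.clip(A @ x + y / sigma, l, u)
        y = y + sigma * (A @ x - z)
        r_prim = np.linalg.norm(A @ x - z, np.inf)
        r_dual = np.linalg.norm(Q @ x + q + A.T @ y, np.inf)
        if max(r_prim, r_dual) <= tol:
            break
    return x, y

# Toy convex QP with box constraints on Ax (illustrative random data)
rng = np.random.default_rng(0)
n, m = 6, 4
M = rng.standard_normal((n, n))
Q, q = M @ M.T + np.eye(n), rng.standard_normal(n)
A = rng.standard_normal((m, n))
x_star, y_star = prox_alm_qp(Q, q, A, l=-np.ones(m), u=np.ones(m))
```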
| File | Description | Size | Format |
|---|---|---|---|
| PHD_thesis_Lowenstein_final.pdf | PhD Thesis of Kristoffer Fink Løwenstein (openly accessible online) | 9.07 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/233832