Nonlinear Model Predictive Control using Neural ODE Replicas of Dynamic Simulators

Published Sep 4, 2023
Shumpei Kubosawa, Takashi Onishi, Yoshimasa Tsuruoka

Abstract

We propose simulation-based nonlinear model predictive control as a first step towards autonomous decision-making for the stable operation of large, complex dynamical systems such as chemical plants. To maintain stable production, the effects of abrupt external disturbances must be eliminated quickly while accounting for such complex dynamic responses. In this paper, we propose a control system that eliminates these effects. The system uses engineering models, including dynamic simulators, built on chemical engineering knowledge. Dynamic simulators are generally not differentiable with respect to actions, whereas differentiable models are advantageous for fast nonlinear optimization. To combine the reliability of dynamic simulators with the benefits of differentiable models, we introduce neural ordinary differential equation models and clone the behaviour of the simulators onto them. The cloned, differentiable neural replica model is then incorporated into a gradient-based nonlinear model predictive controller. Evaluation of this method on a real methanol distillation plant confirms that it eliminates the effects of abrupt heavy-rain disturbances significantly better than existing methods.
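
As a rough illustration of the idea summarised above, the sketch below shows the two ingredients in minimal PyTorch form: a neural ODE surrogate fitted to trajectories logged from a dynamic simulator (behaviour cloning of the dynamics), and a gradient-based model predictive controller that backpropagates a tracking cost through the surrogate rollout. The state and action dimensions, network size, cost weights, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a neural ODE surrogate plus
# gradient-based NMPC; all dimensions and hyper-parameters are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, DT = 4, 2, 0.1  # hypothetical sizes and time step


class NeuralODE(nn.Module):
    """Parameterises the right-hand side dx/dt = f_theta(x, u)."""

    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, STATE_DIM),
        )

    def step(self, x, u):
        # One explicit-Euler integration step; an adaptive ODE solver
        # could be substituted here.
        return x + DT * self.f(torch.cat([x, u], dim=-1))


def clone_simulator(model, states, actions, next_states, epochs=200):
    """Fit the neural ODE to (x, u, x') transitions logged from the simulator."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model.step(states, actions), next_states)
        loss.backward()
        opt.step()
    return model


def gradient_mpc(model, x0, x_ref, horizon=20, iters=100, lr=0.05):
    """Optimise an action sequence by differentiating the rollout cost."""
    u_seq = torch.zeros(horizon, ACTION_DIM, requires_grad=True)
    opt = torch.optim.Adam([u_seq], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        x, cost = x0, 0.0
        for t in range(horizon):
            x = model.step(x, u_seq[t])
            # Quadratic tracking cost plus a small action penalty (assumed weights).
            cost = cost + ((x - x_ref) ** 2).sum() + 1e-2 * (u_seq[t] ** 2).sum()
        cost.backward()
        opt.step()
    return u_seq.detach()[0]  # receding horizon: apply only the first action
```

In a receding-horizon loop, only the first optimised action would be applied at each control interval before re-solving from the newly observed plant state.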

Keywords

dynamic simulator, model predictive control, neural ordinary differential equation, chemical process, disturbance rejection

Section

Special Session Papers