Analysis of the Deployment Strategies of Reinforcement Learning Controllers for Complex Dynamic Systems

Published Nov 24, 2021
Ibrahim Ahmed, Marcos Quinones Grueiro, Gautam Biswas

Abstract

This paper benchmarks several strategies for deploying reinforcement learning (RL)-based controllers on heterogeneous hybrid systems. Sample inefficiency is often a significant cost for RL controllers: sufficient data are needed to train them, and the controllers may take time to converge to an acceptable control policy. This cost is compounded if system health is degrading, or if the larger network of systems cannot afford a gradually improving controller among its constituents. Learning speed can be improved via transfer learning across controllers trained on different tasks: simulations, data-driven models, or separate instances of similar systems. This paper discusses near- and far-transfer across tasks of varying similarity. These approaches are applied to a test-bed of models of cooling towers serving office and residential buildings on a university campus.
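As a brief illustration of the transfer-learning idea mentioned in the abstract, the sketch below shows one common form of near-transfer: warm-starting a target-task policy network with weights from a policy trained on a source task (for example, a simulation or a similar cooling tower). This is a minimal sketch assuming PyTorch and hypothetical network sizes; it is not the paper's actual implementation.

# Minimal near-transfer sketch (illustrative only, not the paper's code).
# Assumes PyTorch; observation/action dimensions are hypothetical.
import torch.nn as nn

def make_policy(obs_dim: int, act_dim: int) -> nn.Sequential:
    # Small MLP policy; the architecture here is an illustrative assumption.
    return nn.Sequential(
        nn.Linear(obs_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, act_dim), nn.Tanh(),
    )

# Source policy: assumed to be trained on the source task (e.g., a simulation).
source_policy = make_policy(obs_dim=8, act_dim=2)
# ... training of source_policy on the source task would happen here ...

# Near-transfer: the target task shares observation and action spaces, so the
# target policy is initialized with the source weights instead of randomly.
target_policy = make_policy(obs_dim=8, act_dim=2)
target_policy.load_state_dict(source_policy.state_dict())

# Fine-tuning on the target task then starts from this warm-started policy,
# which is what reduces the sample cost relative to learning from scratch.

Far-transfer, where the source and target tasks differ more substantially (for example, different system instances or mismatched state spaces), typically requires additional steps such as transferring only some layers or mapping between observation spaces; the sketch above covers only the simplest case.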

How to Cite

Ahmed, I., Quinones Grueiro, M., & Biswas, G. (2021). Analysis of the Deployment Strategies of Reinforcement Learning Controllers for Complex Dynamic Systems. Annual Conference of the PHM Society, 13(1). https://doi.org/10.36001/phmconf.2021.v13i1.3020

Keywords

reinforcement learning, data-driven approaches, smart cities, adaptive control & fault accommodation

Section
Technical Research Papers
