Complementary Meta-Reinforcement Learning for Fault-Adaptive Control


Published Nov 3, 2020
Ibrahim Ahmed, Marcos Quiñones-Grueiro, Gautam Biswas

Abstract

Faults are endemic to all systems. Adaptive fault-tolerant control accepts degraded performance under faults in exchange for continued operation. In systems with abrupt faults and strict time constraints, it is imperative that control adapt quickly to system changes. We present a meta-reinforcement learning approach that quickly adapts the control policy. The approach builds upon model-agnostic meta-learning (MAML). The controller maintains a complement of prior policies learned under system faults. This "library" is evaluated on the system after a new fault occurs, and the best-performing prior policy is used to initialize the new policy. This contrasts with MAML, where the controller samples new policies from a distribution of similar systems at each update step to arrive at the new policy. Our approach improves the sample efficiency of the reinforcement learning process. We evaluate it on a model of fuel tanks subject to abrupt faults.
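The library-based initialization the abstract describes can be sketched in a few lines. The following is a minimal illustration only, assuming a generic policy object with an `act` method and a gym-style environment whose `step` returns `(obs, reward, done)`; all names (`select_initial_policy`, `eval_episodes`) are chosen here for illustration and are not taken from the paper.

```python
import copy

def select_initial_policy(library, env, eval_episodes=3):
    """Evaluate each prior fault policy on the (newly faulted) system
    and return a copy of the best performer to seed further learning."""
    best_policy, best_return = None, float("-inf")
    for policy in library:
        total = 0.0
        for _ in range(eval_episodes):
            obs, done = env.reset(), False
            while not done:
                action = policy.act(obs)
                obs, reward, done = env.step(action)
                total += reward
        avg_return = total / eval_episodes
        if avg_return > best_return:
            best_policy, best_return = policy, avg_return
    # Deep-copy so fine-tuning does not overwrite the library entry.
    return copy.deepcopy(best_policy)
```

The returned copy would then be fine-tuned with the controller's usual reinforcement learning updates, so learning for the new fault starts from the closest prior behavior rather than from scratch.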

How to Cite

Ahmed, I., Quiñones-Grueiro, M., & Biswas, G. (2020). Complementary Meta-Reinforcement Learning for Fault-Adaptive Control. Annual Conference of the PHM Society, 12(1), 8. https://doi.org/10.36001/phmconf.2020.v12i1.1289


Keywords

reinforcement learning, meta learning, fault-tolerant control, data-driven control

Section
Technical Research Papers
