Complementary Meta-Reinforcement Learning for Fault-Adaptive Control



Published Nov 3, 2020
Ibrahim Ahmed, Marcos Quiñones-Grueiro, Gautam Biswas


Faults are endemic to all systems. Adaptive fault-tolerant control accepts degraded performance under faults in exchange for continued operation. In systems with abrupt faults and strict time constraints, it is imperative that control adapt quickly to system changes. We present a meta-reinforcement learning approach that quickly adapts the control policy. The approach builds upon model-agnostic meta-learning (MAML). The controller maintains a complement of prior policies learned under system faults. This "library" is evaluated on the system after a new fault to initialize the new policy. This contrasts with MAML, where the controller samples new policies from a distribution of similar systems at each update step to achieve the new policy. Our approach improves the sample efficiency of the reinforcement learning process. We evaluate it on a model of fuel tanks under abrupt faults.
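The library-based initialization described in the abstract can be illustrated with a minimal sketch: each stored policy is evaluated on the faulty system, and the best-performing one seeds further learning. All names and the toy dynamics below (`env_step`, `evaluate`, `select_initial_policy`, scalar gain policies) are illustrative assumptions, not the authors' actual implementation.

```python
def env_step(state, action):
    """Toy faulty plant (assumed dynamics): the fault changed the gain to 0.9.
    Reward penalizes deviation of the next state from zero."""
    new_state = 0.9 * state + action
    return new_state, -new_state ** 2

def evaluate(policy_gain, step_fn, horizon=20):
    """Return of a scalar linear policy (action = gain * state) on the system."""
    state, total = 1.0, 0.0
    for _ in range(horizon):
        state, reward = step_fn(state, policy_gain * state)
        total += reward
    return total

def select_initial_policy(library, step_fn):
    """Evaluate every prior policy in the library on the newly faulted system
    and return the best performer as the initialization for learning."""
    return max(library, key=lambda gain: evaluate(gain, step_fn))

# Library of policies learned under earlier faults; gain -0.9 cancels the
# faulty dynamics exactly, so it should be selected.
library = [-1.5, -0.9, 0.0, 0.5]
best = select_initial_policy(library, env_step)
```

The selected policy then serves as the starting point for reinforcement-learning updates on the new fault, rather than adapting from a meta-learned initialization at every step as in MAML.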

How to Cite

Ahmed, I., Quiñones-Grueiro, M., & Biswas, G. (2020). Complementary Meta-Reinforcement Learning for Fault-Adaptive Control. Annual Conference of the PHM Society, 12(1), 8.



Keywords: reinforcement learning, meta learning, fault-tolerant control, data-driven control

Technical Papers