A Case-Study-Led Investigation of Explainable AI (XAI) to Support Deployment of Prognostics in Industry
Abstract
Civil nuclear generation plant must maximise its operational uptime in order to remain viable. With ageing plant and heavily regulated operating constraints, monitoring is commonplace, but identifying health indicators to pre-empt disruptive faults is challenging owing to the volumes of data involved. Machine learning (ML) models are increasingly deployed in prognostics and health management (PHM) systems across industrial applications; however, many of these are black-box models that deliver good predictive performance but little or no insight into how predictions are reached. In nuclear generation there is significant regulatory oversight, and therefore a necessity to explain decisions based on the outputs of predictive models. Such explanations enable stakeholders to trust those outputs, satisfy regulatory bodies and, subsequently, make more effective operational decisions. How ML model outputs convey explanations to stakeholders matters: explanations must be expressed in terms that are understandable to humans within the technical domain, so that stakeholders can rapidly interpret predictions, trust them, and act on them more effectively. The main contributions of this paper are: (1) the introduction of XAI into the PHM of industrial assets, together with a novel set of algorithms that translate the explanations produced by SHAP into text-based, human-interpretable explanations; and (2) consideration of the context of these explanations as intended for application to prognostics of critical assets in industrial settings. The use of XAI will not only help in understanding how these ML models work, but also identify the features contributing most to the predicted degradation of the nuclear generation asset.
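The paper's own translation algorithms are not reproduced in this abstract; as a minimal sketch of the general idea, the following Python function (all feature names, values, and the function name are hypothetical, and only numpy is assumed) ranks precomputed per-feature SHAP contributions for a single prediction and renders the most influential ones as plain-language sentences.

```python
import numpy as np

def shap_to_text(shap_values, feature_names, prediction, top_k=3):
    """Translate one instance's SHAP values into a plain-language summary.

    shap_values   : 1-D array of per-feature SHAP contributions for one sample
    feature_names : matching list of human-readable feature names
    prediction    : the model output being explained
    top_k         : how many of the strongest contributors to report
    """
    # Rank features by the magnitude of their contribution, largest first.
    order = np.argsort(np.abs(shap_values))[::-1][:top_k]
    lines = [f"The model predicts a degradation score of {prediction:.2f}."]
    for i in order:
        direction = "increased" if shap_values[i] > 0 else "decreased"
        lines.append(
            f"'{feature_names[i]}' {direction} the predicted score "
            f"by {abs(shap_values[i]):.3f}."
        )
    return " ".join(lines)

# Illustrative sensor features and made-up SHAP values for one prediction:
features = ["bearing temperature", "coolant flow rate", "vibration RMS"]
values = np.array([0.42, -0.15, 0.08])
print(shap_to_text(values, features, prediction=0.73))
```

In practice the SHAP contributions would come from an explainer fitted to the deployed model (e.g. shap.TreeExplainer for tree ensembles), and the sentence templates would be phrased in the asset owner's domain vocabulary rather than the generic wording shown here.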
Keywords: prognostics, machine learning, explainable AI, assets
This work is licensed under a Creative Commons Attribution 3.0 Unported License.