Using Explainable Artificial Intelligence to Interpret Remaining Useful Life Estimation with Gated Recurrent Unit


Published Nov 5, 2024
Marcia L. Baptista, Madhav Mishra, Elsa Henriques, Helmut Prendinger

Abstract

In engineering, prognostics can be defined as the estimation of the remaining useful life (RUL) of a system given its current and past health conditions. The field has drawn attention from research, industry, and government because this technology can improve efficiency and lower maintenance costs in a variety of technical applications. An approach to prognostics that has gained increasing attention is the use of data-driven methods, which typically apply pattern recognition and machine learning to estimate the residual life of equipment from historical data. Despite their promising results, a major disadvantage of these methods is that they are difficult to interpret, that is, to understand why a certain remaining useful life prediction was made at a certain point in time. Greater interpretability could facilitate the adoption of data-driven prognostics in domains such as aeronautics, manufacturing, and energy, where certification is critical. To help address this issue, we use Local Interpretable Model-agnostic Explanations (LIME), from the field of eXplainable Artificial Intelligence (XAI), to analyze the prognostics of a Gated Recurrent Unit (GRU) on the C-MAPSS data. We select the GRU because this deep learning model (a) has an explicit temporal dimension, (b) has shown promising results in prognostics, and (c) is simpler than other recurrent networks. Our results suggest that LIME can infer feature importance for the GRU both globally (for the entire model) and locally (for a given RUL prediction).
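The core mechanism behind LIME, as applied in the abstract, is to perturb an input around the instance of interest, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The following is a minimal NumPy sketch of that idea only; `black_box_rul` is a hypothetical stand-in for the trained GRU, and the noise scale and kernel width are illustrative choices, not the paper's settings.

```python
import numpy as np

# Hypothetical stand-in for the trained GRU regressor: any black box
# mapping feature vectors to RUL predictions will do for this sketch.
def black_box_rul(X):
    return 100.0 - 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2

def lime_explain(f, x, n_samples=5000, kernel_width=0.75, seed=0):
    """Fit a local linear surrogate around instance x (LIME for regression)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = f(Z)
    # 2. Weight perturbed samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Weighted least squares -> local linear coefficients.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # append intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

x0 = np.array([10.0, 2.0])
weights = lime_explain(black_box_rul, x0)
# weights[0] is negative (feature 0 lowers predicted RUL near x0),
# weights[1] is positive (local slope of the quadratic term at x0).
```

Aggregating such local coefficients over many instances is one common way to obtain the global feature-importance view mentioned in the abstract; the `lime` package automates these steps for tabular and time-series inputs.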

How to Cite

Baptista, M. L., Mishra, M., Henriques, E., & Prendinger, H. (2024). Using Explainable Artificial Intelligence to Interpret Remaining Useful Life Estimation with Gated Recurrent Unit. Annual Conference of the PHM Society, 16(1). https://doi.org/10.36001/phmconf.2024.v16i1.4124


Keywords

Explainable AI, Prognostics, LIME, Gated Recurrent Unit

Section: Technical Research Papers