To Trust or Not: Towards Efficient Uncertainty Quantification for Stochastic Shapley Explanations

Published Sep 4, 2023
Joseph Cohen, Eunshin Byon, Xun Huan

Abstract

Recently, explainable AI (XAI) techniques have gained traction in the field of prognostics and health management (PHM) to enhance the credibility and trustworthiness of data-driven nonlinear models. Post-hoc model explanations have been popularized via algorithms such as SHapley Additive exPlanations (SHAP), but remain impractical for real-time prognostics applications due to the curse of dimensionality. As an alternative to deterministic approaches, stochastically sampled Shapley-based approximations offer computational benefits for explaining model predictions. This paper introduces and examines a new concept of explanation uncertainty through the lens of uncertainty quantification of stochastic Shapley attribution estimates. The proposed algorithm for estimating Shapley explanation uncertainty is efficiently applied to the 2021 PHM Data Challenge problem. The uncertainty in the derived explanation for a single prediction is also illustrated through personalized prediction recipe plots, improving post-hoc model visualization. Finally, important practical considerations for the implementation of Shapley-based XAI for industrial prognostics are provided.
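
To make the core idea concrete, the sketch below shows one common way to obtain a stochastic Shapley estimate together with a simple measure of its uncertainty: permutation-sampling Monte Carlo in the spirit of Strumbelj and Kononenko (2010), with a normal-approximation interval derived from the Monte Carlo standard error. This is a minimal, assumed illustration rather than the algorithm proposed in the paper; the function `shapley_mc`, the model `f`, the instance `x`, and the background data `X_bg` are hypothetical placeholders.

```python
# Minimal, hypothetical sketch (not the paper's proposed algorithm):
# permutation-sampling Monte Carlo estimation of a single feature's Shapley
# value, with a Monte Carlo standard error as a simple proxy for
# explanation uncertainty.
import numpy as np

def shapley_mc(f, x, X_bg, feature, n_samples=1000, seed=None):
    """Estimate the Shapley value of `feature` for instance `x` under model `f`.

    `f` maps a 1-D feature vector to a scalar prediction; `X_bg` is a 2-D array
    of background instances. Returns (point estimate, Monte Carlo standard error).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    contribs = np.empty(n_samples)
    for m in range(n_samples):
        perm = rng.permutation(d)                  # random feature ordering
        z = X_bg[rng.integers(len(X_bg))]          # random background instance
        pos = int(np.argmax(perm == feature))      # position of the target feature
        preceding = perm[:pos]
        x_with = z.copy()                          # coalition including `feature`
        x_with[preceding] = x[preceding]
        x_with[feature] = x[feature]
        x_without = z.copy()                       # same coalition without `feature`
        x_without[preceding] = x[preceding]
        contribs[m] = f(x_with) - f(x_without)
    phi_hat = contribs.mean()
    se = contribs.std(ddof=1) / np.sqrt(n_samples)
    return phi_hat, se

# Illustrative usage with a toy linear model:
# f = lambda v: 2.0 * v[0] - 1.0 * v[1]
# phi, se = shapley_mc(f, np.array([1.0, 0.5]), np.zeros((100, 2)), feature=0, seed=0)
# print(f"phi = {phi:.3f} +/- {1.96 * se:.3f} (approx. 95% CI)")
```

Under such a sampling scheme, the interval width shrinks at roughly a 1/sqrt(n_samples) rate, which is one way the trade-off between explanation accuracy and a real-time computational budget can be made explicit.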

Keywords

Explainable AI, Prognostics and health management, Uncertainty quantification

References
Ahmed, I., Jeon, G., & Piccialli, F. (2022). From Artificial Intelligence to Explainable Artificial Intelligence in Industry 4.0: A Survey on What, How, and Where. IEEE Transactions on Industrial Informatics, 18(8), 5031-5042. doi: 10.1109/TII.2022.3146552

Bai, C., Dallasega, P., Orzes, G., & Sarkis, J. (2020). Industry 4.0 technologies assessment: A sustainability perspective. International Journal of Production Economics, 229, 107776. doi: 10.1016/j.ijpe.2020.107776

Chao, M. A., Kulkarni, C., Goebel, K., & Fink, O. (2021a). Aircraft Engine Run-To-Failure Dataset Under Real Flight Conditions. NASA Ames Prognostics Data Repository.

Chao, M. A., Kulkarni, C., Goebel, K., & Fink, O. (2021b). PHM Society Data Challenge 2021. PHM Society, 1-6.

Chen, H., Covert, I. C., Lundberg, S. M., & Lee, S.-I. (2022). Algorithms to estimate Shapley value feature attributions. Preprint, arXiv:2207.07605.

Chen, J., Song, L., Wainwright, M. J., & Jordan, M. I. (2018). L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data. Preprint, arXiv:1808.02610.

Cohen, J., Huan, X., & Ni, J. (2023). Shapley-based Explainable AI for Clustering Applications in Fault Diagnosis and Prognosis. Preprint, arXiv:2303.14581.

Covert, I., & Lee, S.-I. (2021). Improving KernelSHAP: Practical Shapley Value Estimation Using Linear Regression. In A. Banerjee & K. Fukumizu (Eds.), Proceedings of The 24th International Conference on Artificial Intelligence and Statistics (Vol. 130, pp. 3457–3465).

Innes, M. (2018). Flux: Elegant machine learning with Julia. Journal of Open Source Software, 3. doi: 10.21105/joss.00602

Lai, X., Shui, H., Ding, D., & Ni, J. (2021). Data-driven dynamic bottleneck detection in complex manufacturing systems. Journal of Manufacturing Systems, 60, 662-675. doi: 10.1016/j.jmsy.2021.07.016

Lee, J., Davari, H., Singh, J., & Pandhare, V. (2018). Industrial Artificial Intelligence for industry 4.0-based manufacturing systems. Manufacturing Letters, 18, 20-23. doi: 10.1016/j.mfglet.2018.09.002

Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. In I. Guyon et al. (Eds.), Advances in Neural Information Processing Systems (Vol. 30).

Molnar, C. (2022). Interpretable Machine Learning (2nd ed.). Retrieved from https://christophm.github.io/interpretable-ml-book

Park, J. H., Jo, H. S., Lee, S. H., Oh, S. W., & Na, M. G. (2022). A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP. Nuclear Engineering and Technology, 54(4), 1271-1287. doi: 10.1016/j.net.2021.10.024

Redell, N. (2020). ShapML.jl: A Julia package for interpretable machine learning with stochastic Shapley values. GitHub. Retrieved from https://github.com/nredell/ShapML.jl

Senoner, J., Netland, T., & Feuerriegel, S. (2022). Using Explainable Artificial Intelligence to Improve Process Quality: Evidence from Semiconductor Manufacturing. Management Science, 68(8), 5704-5723. doi: 10.1287/mnsc.2021.4190

Shapley, L. S. (1953). A Value for n-Person Games. In H. W. Kuhn & A. W. Tucker (Eds.), Contributions to the Theory of Games (AM-28), Volume II (pp. 307-318). Princeton: Princeton University Press. doi: 10.1515/9781400881970-018

Štrumbelj, E., & Kononenko, I. (2010). An Efficient Explanation of Individual Classifications Using Game Theory. Journal of Machine Learning Research, 11, 1–18.

Štrumbelj, E., & Kononenko, I. (2014). Explaining Prediction Models and Individual Predictions with Feature Contributions. Knowledge and Information Systems, 41(3), 647–665. doi: 10.1007/s10115-013-0679-x

Xu, Z., & Saleh, J. H. (2021). Machine learning for reliability engineering and safety applications: Review of current status and future opportunities. Reliability Engineering & System Safety, 211, 107530. doi: 10.1016/j.ress.2021.107530

Section

Regular Session Papers