MOXAI – Manufacturing Optimization through Model-Agnostic Explainable AI and Data-Driven Process Tuning

Published Jun 27, 2024
Clemens Heistracher, Anahid Wachsenegger, Axel Weißenfeld, Pedro Casas

Abstract

Modern manufacturing equipment offers numerous configurable parameters for optimization, yet operators often underutilize them. Recent advances in machine learning (ML) have brought data-driven models that integrate key equipment characteristics into industrial settings. This paper evaluates the performance of ML models on classification tasks and reports several nuanced observations. Understanding how a model reaches its decisions in failure detection is crucial; a guided approach helps operators comprehend predicted failures, although human verification remains essential. We introduce MOXAI, a data-driven approach that leverages existing pre-trained ML models to optimize manufacturing machine parameters. MOXAI underscores the significance of explainable artificial intelligence (XAI) in enhancing data-driven process tuning for production optimization and predictive maintenance. It assists operators in adjusting process settings to mitigate machine failures and degradation of production quality, relying on DiCE for automatic counterfactual generation and on LIME to make the ML model's decision-making process interpretable. By combining these two techniques, MOXAI both explains the model's predictions and recommends the parameter settings needed to improve the process.
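
As a rough illustration of how the two techniques named above could be combined on top of an existing pre-trained classifier, the sketch below uses the open-source dice-ml and lime packages. The data file, column names, classifier, and list of operator-tunable parameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative MOXAI-style pipeline: a pre-trained classifier, LIME for
# local explanations, and DiCE for counterfactual parameter suggestions.
# File name, columns, and the classifier are hypothetical placeholders.
import dice_ml
import pandas as pd
from dice_ml import Dice
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical process data: numeric settings plus a binary failure label (0 = ok, 1 = failure).
df = pd.read_csv("process_data.csv")  # columns: temperature, torque, speed, failure
X, y = df.drop(columns=["failure"]), df["failure"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for the existing pre-trained ML model that MOXAI builds on.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LIME: explain why one production cycle is classified as a failure.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["ok", "failure"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test.iloc[0].values, clf.predict_proba, num_features=3
)
print(explanation.as_list())  # per-feature contributions to this prediction

# DiCE: generate counterfactual settings that flip the prediction to "ok",
# varying only the parameters an operator can actually adjust.
data = dice_ml.Data(
    dataframe=df,
    continuous_features=["temperature", "torque", "speed"],
    outcome_name="failure",
)
model = dice_ml.Model(model=clf, backend="sklearn")
dice = Dice(data, model, method="random")
counterfactuals = dice.generate_counterfactuals(
    X_test.iloc[[0]],
    total_CFs=3,
    desired_class="opposite",
    features_to_vary=["temperature", "speed"],  # assumed operator-tunable
)
counterfactuals.visualize_as_dataframe(show_only_changes=True)
```

In this sketch, LIME answers why the model flagged a given cycle as a failure, while DiCE proposes concrete changes to the tunable settings that would flip the prediction back to a healthy state, which is the role the abstract describes for the two methods within MOXAI.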

How to Cite

Heistracher, C., Wachsenegger, A., Weißenfeld, A., & Casas, P. (2024). MOXAI – Manufacturing Optimization through Model-Agnostic Explainable AI and Data-Driven Process Tuning. PHM Society European Conference, 8(1), 7. https://doi.org/10.36001/phme.2024.v8i1.4111

Keywords

Failure Analysis, ML-driven System Tuning, XAI

References
Ameli, M., Becker, P. A., Lankers, K., van Ackeren, M., Bähring, H., & Maaß, W. (2022). Explainable unsupervised multi-sensor industrial anomaly detection and categorization. In 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 1468–1475).

Ates, E., Aksar, B., Leung, V. J., & Coskun, A. K. (2021). Counterfactual explanations for multivariate time series. In 2021 International Conference on Applied Artificial Intelligence (ICAPAI) (pp. 1–8).

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140.

Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321–357.

Jakubowski, J., Stanisz, P., Bobek, S., & Nalepa, G. J. (2021). Explainable anomaly detection for hot-rolling industrial process. In 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 1–10).

Jakubowski, J., Stanisz, P., Bobek, S., & Nalepa, G. J. (2022). Roll wear prediction in strip cold rolling with physics-informed autoencoder and counterfactual explanations. In 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 1–10). doi: 10.1109/DSAA54385.2022.10032357

Jalali, A., Haslhofer, B., Kriglstein, S., & Rauber, A. (2023). Predictability and comprehensibility in post-hoc XAI methods: A user-centered analysis. In Science and Information Conference (pp. 712–733).

Kulesza, A., & Taskar, B. (2012). Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2–3), 123–286.

Lipovetsky, S., & Conklin, M. (2001). Analysis of regression in game theory approach. Applied Stochastic Models in Business and Industry, 17(4), 319–330. Retrieved from https://onlinelibrary.wiley.com/doi/abs/10.1002/asmb.446 doi: 10.1002/asmb.446

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In I. Guyon et al. (Eds.), Advances in Neural Information Processing Systems (Vol. 30). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf

Matzka, S. (2020). Explainable artificial intelligence for predictive maintenance applications. In 2020 Third International Conference on Artificial Intelligence for Industries (AI4I) (pp. 69–74).

Molnar, C. (2020). Interpretable machine learning. Lulu.com.

Mothilal, R. K., Sharma, A., & Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 607–617). New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/3351095.3372850 doi: 10.1145/3351095.3372850

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/2939672.2939778 doi: 10.1145/2939672.2939778

Schockaert, C., Macher, V., & Schmitz, A. (2020). VAE-LIME: Deep generative model based approach for local data-driven model interpretability applied to the ironmaking industry. arXiv preprint arXiv:2007.10256.

Seiffer, C., Ziekow, H., Schreier, U., & Gerling, A. (2021). Detection of concept drift in manufacturing data with SHAP values to improve error prediction. In DATA ANALYTICS 2021: The Tenth International Conference on Data Analytics (pp. 51–60).

Senoner, J., Netland, T., & Feuerriegel, S. (2022). Using explainable artificial intelligence to improve process quality: Evidence from semiconductor manufacturing. Management Science, 68(8), 5704–5723.

Shrikumar, A., Greenside, P., & Kundaje, A. (2017). Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (pp. 3145–3153). JMLR.org.

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2921–2929).
Section
Technical Papers