Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly Detection



Published Jun 27, 2024
Abishek Srinivasan, Varun Singapura Ravi, Juan Carlos Andresen, Anders Holst

Abstract

The complexity of modern electro-mechanical systems requires the development of sophisticated diagnostic methods, such as anomaly detection, capable of detecting deviations. Conventional anomaly detection approaches, such as signal processing and statistical modelling, often struggle to handle the intricacies of complex systems, particularly when dealing with multi-variate signals. In contrast, neural-network-based anomaly detection methods, especially Auto-Encoders, have emerged as a compelling alternative, demonstrating remarkable performance. However, Auto-Encoders are inherently opaque in their decision-making, which hinders their practical implementation at scale. Addressing this opacity is essential for enhancing the interpretability and trustworthiness of anomaly detection models. In this work, we address this challenge by employing a feature selector to select features and counterfactual explanations to give context to the model output. We tested this approach on the SKAB benchmark dataset and on an industrial time-series dataset. The gradient-based counterfactual explanation approach was evaluated via validity, sparsity, and distance measures. Our experimental findings illustrate that the proposed counterfactual approach can offer meaningful and valuable insights into the model's decision-making process by explaining fewer signals than conventional approaches. These insights enhance the trustworthiness and interpretability of anomaly detection models.
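The article page itself contains no code; as a rough, minimal sketch of the general recipe the abstract describes (a gradient-based counterfactual search over an Auto-Encoder's reconstruction error, in the spirit of Wachter et al., 2017), one could write something like the following. The names `autoencoder`, `tau`, and `lam`, and the plain L1 proximity penalty, are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def gradient_counterfactual(autoencoder, x, tau, lam=0.1, steps=500, lr=1e-2):
    """Sketch of a gradient-based counterfactual search for an anomalous window x.

    The loop pushes the autoencoder's reconstruction error (used here as the
    anomaly score) below the detection threshold tau, while an L1 proximity
    term keeps the edits to x small and sparse.  All names and the weighting
    lam are illustrative assumptions, not taken from the paper.
    """
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        score = torch.mean((autoencoder(x_cf) - x_cf) ** 2)  # anomaly score
        loss = score + lam * torch.norm(x_cf - x, p=1)       # score + proximity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if score.item() < tau:  # "valid": sample is no longer flagged anomalous
            break
    return x_cf.detach()
```

The L1 penalty is one simple way to encourage the sparsity that the evaluation measures reward; the paper's actual loss and its feature-selection step may differ.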

How to Cite

Srinivasan, A., Singapura Ravi, V., Andresen, J. C., & Holst, A. (2024). Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly Detection. PHM Society European Conference, 8(1), 9. https://doi.org/10.36001/phme.2024.v8i1.4087


Keywords

Counterfactual Explanation, Auto-Encoder, Time-series

References
Antwarg, L., Miller, R. M., Shapira, B., & Rokach, L. (2021). Explaining anomalies detected by autoencoders using Shapley additive explanations. Expert Systems with Applications, 186, 115736.

Chakraborttii, C., & Litz, H. (2020). Improving the accuracy, adaptability, and interpretability of SSD failure prediction models. In Proceedings of the 11th ACM Symposium on Cloud Computing (pp. 120–133).

Haldar, S., John, P. G., & Saha, D. (2021). Reliable counterfactual explanations for autoencoder based anomalies. In Proceedings of the 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CoDS & 26th COMAD) (pp. 83–91).

Katser, I. D., & Kozitsin, V. O. (2020). Skoltech Anomaly Benchmark (SKAB). https://www.kaggle.com/dsv/1693952. Kaggle. doi: 10.34740/KAGGLE/DSV/1693952

Li, Z., Zhu, Y., & Van Leeuwen, M. (2023). A survey on explainable anomaly detection. ACM Transactions on Knowledge Discovery from Data, 18(1), 1–54.

Molnar, C. (2020). Interpretable machine learning. Lulu.com.

Mothilal, R. K., Sharma, A., & Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 607–617).

Schmidl, S., Wenig, P., & Papenbrock, T. (2022). Anomaly detection in time series: a comprehensive evaluation. Proceedings of the VLDB Endowment, 15(9), 1779–1797.

Sulem, D., Donini, M., Zafar, M. B., Aubet, F.-X., Gasthaus, J., Januschowski, T., . . . Archambeau, C. (2022). Diverse counterfactual explanations for anomaly detection in time series. arXiv preprint arXiv:2203.11103.

Verma, S., Dickerson, J. P., & Hines, K. E. (2020). Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596.

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841.

A2. Confusion-matrix-like expression for validity using our approach

In this section, valid samples are presented in a confusion-matrix-like setting for the SKAB and the real-world industrial datasets in Table 6 and Table 7, respectively.

Table 6. Validity confusion matrix for SKAB test data.

                 Prediction outcome
                 Valid      Not Valid
                 1885       903
                 1068       505
Total            2953       1408
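For context, the Valid and Not Valid counts in such a table reduce to thresholding each counterfactual's anomaly score against the detection threshold. A minimal tallying sketch, with assumed names (`scores_cf`, `tau`) rather than the paper's notation:

```python
import numpy as np

def validity_split(scores_cf, tau):
    """Tally valid vs. not-valid counterfactuals.

    A counterfactual counts as valid when its anomaly score (e.g. the
    autoencoder's reconstruction error after editing the sample) falls
    below the detection threshold tau.  Names are illustrative.
    """
    scores_cf = np.asarray(scores_cf)
    valid = int((scores_cf < tau).sum())
    return valid, int(scores_cf.size) - valid

# Example: two of three counterfactuals fall below the threshold.
# validity_split([0.02, 0.40, 0.01], tau=0.1) -> (2, 1)
```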

Section
Technical Papers