Anomaly Detection on Time Series with Wasserstein GAN applied to PHM



Published Jun 4, 2023
Mélanie Ducoffe, Ilyass Haloui, Jayant Sen Gupta

Abstract

Modern vehicles are increasingly connected. In the aerospace industry, for instance, newer aircraft are already equipped with data concentrators and enough wireless connectivity to transmit the sensor data collected during the whole flight to the ground, usually while the airplane is at the gate. Platforms that were not designed with this capability can also be retrofitted with devices that enable wireless data collection, as is done on the Airbus A320 family. For military and heavy helicopters, the Health and Usage Monitoring System (HUMS) likewise allows the collection of sensor data. Finally, satellites continuously send sensor data, called telemetries, to the ground. Fortunately, the platforms behave normally most of the time, so faults and failures are rare. In order to go beyond corrective or preventive maintenance and anticipate future faults and failures, we have to look for any drift or change in a system's behavior, in data that is normal almost all the time. Moreover, the collected sensor data are time series. The problem is therefore anomaly detection, or novelty detection, in time series data. Among the machine learning techniques that can be used to analyze such data, Deep Learning, and especially Convolutional Neural Networks, is very popular since it has surpassed human performance on image classification and object detection. In this field, Generative Adversarial Networks are a technique for generating data similar to a potentially high-dimensional original dataset. In our case, generating new data could be useful to enrich the learning dataset with generated abnormal data and make it less unbalanced. Yet we are more interested in the potential of such techniques to perform anomaly detection on high-dimensional data, by comparing newly observed data with data that could have been generated from a distribution built from normal examples only.
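
To make the last idea concrete, the sketch below shows one way a new time-series window could be scored against a Wasserstein GAN trained on normal data only, using an AnoGAN-style latent search: the window is compared with the closest sample the generator can produce, and the critic adds a discrepancy term. This is a minimal illustration, not the authors' implementation; the generator, critic, window shape, and all hyperparameters are assumptions.

    # Minimal sketch (illustrative, not the authors' implementation): scoring one
    # time-series window with a Wasserstein GAN trained on normal windows only.
    # `generator` maps a latent vector to a window; `critic` is the WGAN critic.
    import torch

    def anomaly_score(x, generator, critic, latent_dim=64, steps=200, lr=1e-2, lam=0.1):
        """x has shape (1, channels, length); a higher score means more anomalous."""
        z = torch.randn(1, latent_dim, requires_grad=True)  # start from a random latent code
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):                               # search the latent space for the
            opt.zero_grad()                                  # closest "normal" reconstruction
            x_hat = generator(z)
            rec = torch.mean(torch.abs(x - x_hat))           # reconstruction residual
            disc = torch.mean(torch.abs(critic(x) - critic(x_hat)))  # critic discrepancy
            loss = (1 - lam) * rec + lam * disc
            loss.backward()
            opt.step()
        with torch.no_grad():                                # final combined score
            x_hat = generator(z)
            rec = torch.mean(torch.abs(x - x_hat))
            disc = torch.mean(torch.abs(critic(x) - critic(x_hat)))
        return ((1 - lam) * rec + lam * disc).item()

A threshold on such a score, estimated on held-out normal windows, would then separate normal behavior from candidate anomalies.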



Keywords

Prognostics, Deep Learning, Recurrent Neural Networks, Attention

Section
Technical Papers