Scalable Change Analysis and Representation Using Characteristic Function
Abstract
In this paper, we propose a novel framework that helps human operators, who are domain experts but not necessarily familiar with statistics, analyze a complex system and find unknown changes and their causes. Despite the prevalence of such problems, researchers have rarely tackled them. Our framework focuses on representing and explaining changes that occur between two datasets, specifically normal data and data exhibiting the observed changes. We employ two-dimensional scatter plots, which provide a comprehensible representation without requiring statistical knowledge and thereby help a human operator intuitively understand the change and its cause. Finding two-attribute pairs whose scatter plots explain the change well does not require high computational complexity, owing to our novel characteristic function-based approach. Although a hyper-parameter must be determined, our analysis introduces a novel, appropriate prior distribution that determines it automatically. The experimental results show that our method identifies the change and its cause as accurately as state-of-the-art kernel hypothesis testing approaches, while reducing computational costs by up to almost 99% on popular benchmark datasets. An experiment using real vehicle driving data demonstrates the practicality of our framework.
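The abstract describes ranking attribute pairs by how strongly their joint distribution differs between the normal and changed datasets, measured via characteristic functions. The following is a minimal sketch of that general idea, not the authors' exact algorithm: it compares empirical characteristic functions of each attribute pair at randomly sampled frequency points and ranks pairs by the resulting discrepancy. The function names and the random-frequency evaluation scheme are illustrative assumptions.

```python
import numpy as np

def ecf(data, t):
    # Empirical characteristic function of a 2-D sample `data` (n, 2),
    # evaluated at frequency points t (m, 2); returns a complex (m,) vector.
    return np.exp(1j * data @ t.T).mean(axis=0)

def cf_distance(x, y, t):
    # Mean squared difference between the two ECFs over the frequency points.
    return np.mean(np.abs(ecf(x, t) - ecf(y, t)) ** 2)

def rank_pairs(normal, changed, n_freq=64, seed=0):
    # Score every two-attribute pair by its ECF discrepancy and sort
    # pairs from the most to the least changed.
    rng = np.random.default_rng(seed)
    d = normal.shape[1]
    scores = {}
    for i in range(d):
        for j in range(i + 1, d):
            t = rng.normal(size=(n_freq, 2))  # random frequency grid
            scores[(i, j)] = cf_distance(
                normal[:, [i, j]], changed[:, [i, j]], t)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A top-ranked pair is then a natural candidate for the two-dimensional scatter plot shown to the operator, since its joint distribution shifted the most between the two datasets.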
anomaly detection, characteristic function, change analysis