Isolation-Based Feature Selection for Unsupervised Outlier Detection



Published Sep 22, 2019
Qibo Yang, Jaskaran Singh, Jay Lee


For high-dimensional datasets, uninformative features and complex interactions between features can incur high computational costs and make outlier detection algorithms inefficient. Most feature selection methods are designed for supervised classification and regression; few works target unsupervised outlier detection specifically. This paper proposes a novel isolation-based feature selection (IBFS) method for unsupervised outlier detection, built on the training process of isolation forest: when a value of a feature is used to split the data, the imbalance of the resulting partition is measured and used to quantify how strongly that feature can expose outliers. We compare the proposed method with variance, Laplacian score, and kurtosis. These methods are first benchmarked on simulated data to illustrate their characteristics, and their performance is then evaluated with one-class support vector machine, isolation forest, and local outlier factor on several real-world datasets. The results show that the proposed method improves the performance of isolation forest, and that its results are similar to, and sometimes better than, another useful outlier indicator, kurtosis, which demonstrates the effectiveness of the proposed method. We also observe that variance and Laplacian score sometimes perform similarly on these datasets.
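The abstract's core idea, scoring a feature by how unevenly its random split points divide the data, can be sketched as follows. This is an illustrative toy, not the authors' exact IBFS algorithm: the function `ibfs_scores`, the use of repeated uniform random splits, and the imbalance measure `|p - 0.5|` are all assumptions made here for demonstration.

```python
import numpy as np

def ibfs_scores(X, n_rounds=100, rng=None):
    """Toy isolation-based feature scoring (assumed variant, not the paper's).

    For each feature, draw random split points over its range (as an
    isolation tree would) and accumulate how imbalanced each split is.
    Features whose splits tend to isolate only a few points, e.g. because
    outliers stretch the feature's range, receive higher scores.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    scores = np.zeros(d)
    for _ in range(n_rounds):
        for j in range(d):
            col = X[:, j]
            lo, hi = col.min(), col.max()
            if lo == hi:
                continue  # constant feature: no split possible
            split = rng.uniform(lo, hi)
            p = np.count_nonzero(col < split) / n  # fraction on the left
            # assumed imbalance measure: deviation from an even split
            scores[j] += abs(p - 0.5)
    return scores / n_rounds

# Feature 0 carries one extreme outlier; feature 1 is plain uniform noise.
X = np.random.default_rng(0).uniform(size=(100, 2))
X[0, 0] = 10.0
s = ibfs_scores(X, n_rounds=50, rng=1)
print(s)  # the outlier-bearing feature 0 scores higher than feature 1
```

Because the outlier stretches feature 0's range, most random split points fall between the bulk of the data and the outlier, producing highly imbalanced partitions and a high score; a uniform feature averages a much smaller imbalance.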

How to Cite

Yang, Q., Singh, J., & Lee, J. (2019). Isolation-Based Feature Selection for Unsupervised Outlier Detection. Annual Conference of the PHM Society, 11(1).



Keywords: feature selection, outlier detection, isolation forest

Technical Research Papers
