A Case Study Comparing ROC and PRC Curves for Imbalanced Data


Published Oct 26, 2023
Dan Watson, Dr. Karl Reichard, Aaron Isaacson

Abstract

Receiver operating characteristic (ROC) curves are a mainstay in binary classification and have seen widespread use since their inception characterizing radar receivers in 1941. Widely used and accepted, the ROC curve is the default option for many application spaces. Building on prior work, the Prognostics and Health Management (PHM) community naturally adopted ROC curves to visualize classifier performance. While the ROC curve is perhaps the best-known visualization of binary classifier performance, it is not the only game in town. Authors across various STEM fields have published works extolling other metrics and visualizations for binary classifier performance evaluation. These include, but are not limited to, the precision-recall characteristic (PRC) curve, area-under-the-curve metrics, bookmaker informedness, and markedness. This paper reviews these visualizations and metrics, provides references for more exhaustive treatments of them, and presents a case study of their use on an imbalanced PHM dataset. PHM binary classification problems are often highly imbalanced, with a low prevalence of positive (faulty) cases relative to negative (nominal/healthy) cases. In the presented dataset, time-domain accelerometer data from a series of run-to-failure ball-on-disk scuffing tests provide a case where the vast majority of instances, more than 94%, are nominally healthy. A condition indicator algorithm targeting the hypothesized physical system response is validated against less informed classifiers, and several characteristic curves are then used to showcase the performance improvement of the physics-informed condition indicator.
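
To make the imbalance argument concrete, the sketch below (an illustration, not the paper's method or data) compares ROC and PRC summaries for two hypothetical scorers on a synthetic dataset with roughly the class ratio the abstract describes (~94% healthy). The scorers, class ratio, and operating threshold are all illustrative assumptions; informedness (TPR + TNR − 1) and markedness (PPV + NPV − 1) follow their standard definitions.

```python
# A minimal sketch (assumptions, not the paper's method): compare ROC and
# PRC behavior for two hypothetical scorers on a synthetic dataset with
# roughly the class imbalance described in the abstract (~94% healthy).
import numpy as np
from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             roc_auc_score, roc_curve)

rng = np.random.default_rng(0)

# ~94% negative (healthy) vs. ~6% positive (faulty) instances.
n_neg, n_pos = 9400, 600
y_true = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])

# Hypothetical condition-indicator scores: a weakly separating scorer and a
# better-informed one (larger mean shift for the faulty class).
weak = np.concatenate([rng.normal(0.0, 1.0, n_neg), rng.normal(0.5, 1.0, n_pos)])
informed = np.concatenate([rng.normal(0.0, 1.0, n_neg), rng.normal(2.0, 1.0, n_pos)])

for name, score in [("weak", weak), ("informed", informed)]:
    fpr, tpr, _ = roc_curve(y_true, score)                # ROC curve coordinates
    prec, rec, _ = precision_recall_curve(y_true, score)  # PRC curve coordinates
    print(f"{name}: ROC AUC = {roc_auc_score(y_true, score):.3f}, "
          f"PR AUC = {average_precision_score(y_true, score):.3f}")

# Bookmaker informedness and markedness at one assumed operating threshold.
y_hat = (informed >= 1.0).astype(int)
tp = int(np.sum((y_hat == 1) & (y_true == 1)))
fp = int(np.sum((y_hat == 1) & (y_true == 0)))
tn = int(np.sum((y_hat == 0) & (y_true == 0)))
fn = int(np.sum((y_hat == 0) & (y_true == 1)))
informedness = tp / (tp + fn) + tn / (tn + fp) - 1  # TPR + TNR - 1
markedness = tp / (tp + fp) + tn / (tn + fn) - 1    # PPV + NPV - 1
print(f"informedness = {informedness:.3f}, markedness = {markedness:.3f}")
```

At this prevalence the PR baseline sits near 0.06 (the positive rate), so a weak scorer's PR AUC stays close to that floor even when its ROC AUC looks moderate; that gap between the two views is the behavior the paper's ROC-versus-PRC comparison examines.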

How to Cite

Watson, D., Reichard, K., & Isaacson, A. (2023). A Case Study Comparing ROC and PRC Curves for Imbalanced Data. Annual Conference of the PHM Society, 15(1). https://doi.org/10.36001/phmconf.2023.v15i1.3479


Keywords

ROC, PRC, Scuffing, Wear, Fault, Envelope, Optimization, Decision logic, Amplitude demodulation

Section

Technical Research Papers
