Validating Machine-learned Diagnostic Classifiers in Safety Critical Applications with Imbalanced Populations

Published Sep 24, 2018
Daniel Wade, Andrew Wilson, Abraham Reddy, Raj Bharadwaj

Abstract

Data science techniques such as machine learning are rapidly becoming available to engineers building models from system data, such as aircraft operations data. These techniques require validation before they can be used in fielded systems that provide recommendations to operators or maintainers. Methods for validating and testing machine-learned algorithms generally focus on model performance metrics such as accuracy or F1-score. Many aviation datasets are highly imbalanced, which can invalidate some underlying assumptions of machine learning models. Two simulations are performed to show how common performance metrics respond to imbalanced populations. The results show that each performance metric responds differently to the same sample depending on the imbalance ratio between the two classes, and they indicate that traditional methods for repairing imbalance in the sample may not provide the rigorous validation necessary in safety-critical applications. The two simulations also indicate that authorities must be cautious when mandating metrics as model acceptance criteria, because the chosen metrics can significantly influence the model parameters.
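
As a minimal illustration of the effect described above (not a reproduction of the paper's simulations), the sketch below scores a hypothetical classifier with an assumed fixed 80% true positive rate and 5% false positive rate on synthetic populations of varying class imbalance. The rates, sample size, and metric choices are assumptions for demonstration only; the point is that accuracy and F1-score respond very differently to the same classifier as the positive class becomes rare.

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy_and_f1(y_true, y_pred):
    """Compute accuracy and F1-score (positive class) from label arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, f1

# Hypothetical classifier quality held fixed: 80% true positive rate,
# 5% false positive rate. Only the class imbalance ratio changes.
n = 100_000
for positive_rate in (0.5, 0.1, 0.01, 0.001):
    y_true = (rng.random(n) < positive_rate).astype(int)
    u = rng.random(n)
    y_pred = np.where(y_true == 1, u < 0.80, u < 0.05).astype(int)
    acc, f1 = accuracy_and_f1(y_true, y_pred)
    print(f"positive rate {positive_rate:>6.3f}: accuracy={acc:.3f}  F1={f1:.3f}")
```

Under these assumed rates, accuracy stays near 0.95 regardless of the imbalance ratio, while F1-score falls toward zero as the positive class shrinks, illustrating why a single mandated metric can be misleading as an acceptance criterion for imbalanced populations.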

How to Cite

Wade, D., Wilson, A., Reddy, A., & Bharadwaj, R. (2018). Validating Machine-learned Diagnostic Classifiers in Safety Critical Applications with Imbalanced Populations. Annual Conference of the PHM Society, 10(1). https://doi.org/10.36001/phmconf.2018.v10i1.192

Keywords

machine-learned model validation

Section
Technical Research Papers