Interpretable Neural Network with Limited Weights for Constructing Simple and Explainable HI using SHM Data


Published Oct 28, 2022
Morteza Moradi Panagiotis Komninos Rinze Benedictus Dimitrios Zarouchas

Abstract

Recently, companies all over the world have been focusing on improving autonomous health management systems in order to enhance performance and reduce downtime costs. To achieve this, remaining useful life prediction has received considerable attention. These predictions depend on a proper design process and on the quality of health indicators (HIs) generated from structural health monitoring sensors, assessed against previously established prognostic evaluation criteria. Constructing such HIs from noisy sensory data demands powerful models that enable the automatic selection and fusion of features taken from the relevant measurements. Deep learning models are promising candidates for autonomously extracting features from large volumes of data without requiring considerable domain expertise. Nonetheless, the features established by artificial neural networks are difficult to comprehend and cannot be regarded as physical system characteristics. In this regard, the goal of this paper is to develop a new model: an interpretable artificial neural network that enables the automatic selection and fusion of features to construct the most appropriate HIs with remarkably fewer parameters. This model consists of additive and multiplicative layers that provide a feature fusion that better reflects the system's physical properties. Additionally, the weights are discretized in two ways: a) using a ternary form with values {-1, 0, 1}, and b) relaxing the aforementioned ternary form by rounding the weights to the first decimal place in the range [-1, 1]. Both discretization techniques softly control the number of parameters that are ignored. This mechanism ensures interpretability for the neural network by extracting simple yet powerful equations representing the constructed HIs. Finally, the model's performance is evaluated and compared with other approaches using a practical case study. The results show that the HIs designed by the proposed approach are both interpretable and of high quality according to the HI evaluation criteria.
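The two discretization schemes described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pruning threshold used for the ternary form is an assumed value, and the function names are hypothetical.

```python
import numpy as np

def discretize_ternary(w, threshold=0.05):
    """Ternary form: map each weight to {-1, 0, 1}.
    Weights with |w| <= threshold are zeroed out (softly pruned);
    the threshold value here is an illustrative assumption."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) <= threshold, 0.0, np.sign(w))

def discretize_rounded(w):
    """Relaxed form: clip weights to [-1, 1] and round to one decimal place."""
    w = np.clip(np.asarray(w, dtype=float), -1.0, 1.0)
    return np.round(w, 1)

weights = np.array([0.03, -0.72, 0.48, -0.01, 0.96])
print(discretize_ternary(weights))  # -> [ 0. -1.  1.  0.  1.]
print(discretize_rounded(weights))
```

In both schemes, near-zero weights collapse to zero, so the surviving terms of the additive and multiplicative layers can be read off directly as a compact HI equation.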

How to Cite

Moradi, M., Komninos, P., Benedictus, R., & Zarouchas, D. (2022). Interpretable Neural Network with Limited Weights for Constructing Simple and Explainable HI using SHM Data. Annual Conference of the PHM Society, 14(1). https://doi.org/10.36001/phmconf.2022.v14i1.3185


Keywords

Prognostics and health management, Structural health monitoring, Intelligent health indicator, Interpretable neural network

Section
Technical Papers