On Adversarial Vulnerability of PHM algorithms – An initial Study



Published Nov 24, 2021
Weizhong Yan, Zhaoyuan Yang, Jianwei Qiu

Abstract

With the proliferation of deep learning (DL) applications across diverse domains, the vulnerability of DL models to adversarial attacks has become an increasingly active research topic in Computer Vision (CV) and Natural Language Processing (NLP). DL has also been widely adopted in diverse PHM applications, where the data are primarily time-series sensor measurements. While these advanced DL algorithms/models have improved the performance of PHM algorithms, their vulnerability to adversarial attacks has not drawn much attention in the PHM community. In this paper we attempt to explore the vulnerability of PHM algorithms. More specifically, we investigate strategies for attacking PHM algorithms by considering several unique characteristics of time-series sensor measurement data. We use two real-world PHM applications as examples to validate our attack strategies and to demonstrate that PHM algorithms are indeed vulnerable to adversarial attacks.
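As a hedged illustration of the kind of adversarial attack the abstract refers to (not the authors' specific method), the sketch below applies a one-step FGSM-style perturbation to a toy time-series classifier in PyTorch. The model architecture, input shapes, labels, and epsilon value are all hypothetical placeholders chosen only to make the example self-contained.

```python
# Illustrative sketch only: a generic FGSM perturbation on time-series input.
# All names and shapes here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class TinyTSClassifier(nn.Module):
    """Minimal 1-D CNN standing in for a PHM fault classifier (hypothetical)."""
    def __init__(self, n_channels: int = 4, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        # x: (batch, channels, time)
        return self.net(x)

def fgsm_attack(model, x, y, epsilon: float = 0.05):
    """One-step FGSM: nudge the input in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    model = TinyTSClassifier()
    x = torch.randn(8, 4, 128)      # 8 windows of 4-channel sensor data (placeholder)
    y = torch.randint(0, 3, (8,))   # placeholder fault labels
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

In practice, attacks on sensor data would also need to respect constraints such as sensor ranges and temporal smoothness, which this minimal sketch does not model.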

How to Cite

Yan, W., Yang, Z., & Qiu, J. (2021). On Adversarial Vulnerability of PHM algorithms – An initial Study. Annual Conference of the PHM Society, 13(1). https://doi.org/10.36001/phmconf.2021.v13i1.3057


Keywords

adversarial machine learning, time-series, prognostics, security, vulnerability

Section
Technical Research Papers