Self-supervised Learning for Efficient Remaining Useful Life Prediction



Published Oct 29, 2022
Wilhelm Söderkvist Vermelin, Andreas Lövberg, Konstantinos Kyprianidis


Canonical deep learning-based Remaining Useful Life (RUL) prediction relies on supervised learning methods, which in turn require large data sets of run-to-failure data to ensure model performance. In a large class of cases, run-to-failure data is difficult to collect in practice, as it may be expensive and unsafe to operate assets until failure. As such, there is a need to leverage data that are not run-to-failure but may still contain some measurable, and thus learnable, degradation signal. In this paper, we propose utilizing self-supervised learning as a pretraining step to learn representations of the data that enable efficient training on the downstream task of RUL prediction. The self-supervised learning task chosen is time series sequence ordering, a task that involves constructing tuples, each consisting of $n$ sequences sampled from the time series and reordered with some probability $p$. Subsequently, a classifier is trained on the resulting binary classification task: distinguishing between correctly ordered and shuffled tuples. The classifier's weights are then transferred to the RUL model and fine-tuned using run-to-failure data. We show that the proposed self-supervised learning scheme can retain performance when training on a fraction of the full data set. In addition, we show indications that self-supervised learning as a pretraining step can enhance the performance of the model even when training on the full run-to-failure data set. To conduct our experiments, we use a data set of simulated run-to-failure turbofan jet engines.
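The pretext-task construction described in the abstract (sampling $n$ sequences from a time series and reordering them with probability $p$) can be sketched as below. This is a minimal illustration, not the paper's implementation; the function name, defaults, and window-sampling details are assumptions.

```python
import numpy as np

def make_ordering_example(series, n=3, seq_len=20, p=0.5, rng=None):
    """Build one sequence-ordering pretext example from a 1-D time series.

    Samples n windows at distinct, temporally sorted start positions,
    then shuffles the windows with probability p. Returns (windows, label)
    where label is 1 for correctly ordered and 0 for shuffled.
    Illustrative sketch only; windows may overlap.
    """
    rng = np.random.default_rng(rng)
    max_start = len(series) - seq_len
    # Distinct start indices, sorted so the windows are in temporal order.
    starts = np.sort(rng.choice(max_start, size=n, replace=False))
    windows = [series[s:s + seq_len] for s in starts]
    label = 1
    if rng.random() < p:
        # Re-draw permutations until the order actually changes.
        order = rng.permutation(n)
        while np.array_equal(order, np.arange(n)):
            order = rng.permutation(n)
        windows = [windows[i] for i in order]
        label = 0
    return np.stack(windows), label
```

A classifier trained on such (windows, label) pairs learns temporal structure without any failure labels; its weights can then initialize the RUL model for fine-tuning on the available run-to-failure data.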

How to Cite

Söderkvist Vermelin, W., Lövberg, A., & Kyprianidis, K. (2022). Self-supervised Learning for Efficient Remaining Useful Life Prediction. Annual Conference of the PHM Society, 14(1).



Keywords: self-supervised learning, self-supervised, deep learning, machine learning, remaining useful life, unsupervised learning, neural networks

Poster Presentations