Comparing Feature and Trajectory-Based Remaining Useful Life Modeling of Electrical Resistance Heating Wires


Published Nov 5, 2024
Simon Mählkvist, Wilhelm Söderkvist Vermelin, Thomas Helander, Konstantinos Kyprianidis

Abstract

Industrial heating accounts for a substantial share of annual global greenhouse gas emissions. The transition to fossil-free operations in the heating industry is closely linked to advancements in industrial electrical heating systems, especially those using resistance heating wires. In this context, Prognostics and Health Management is crucial for enhancing system reliability and sustainability through predictive maintenance strategies.

The integration of machine learning technologies into Prognostics and Health Management has significantly improved the precision and applicability of Remaining Useful Life modeling. This improvement enables more accurate predictions of component lifespans, optimizes maintenance schedules, and enhances operational efficiency in industrial heating applications. These developments are essential for reducing greenhouse gas emissions in the sector.

This paper serves as a guide for conducting Remaining Useful Life modeling for industrial batch processes. It evaluates and compares two methodologies: deep learning-based approaches using full time-series data, such as recurrent neural networks and their variants, and feature-engineering-based methods, including random forest regression and support vector machines. Our results show that the feature-oriented approach performs better overall in terms of predictive accuracy and computational efficiency. The study includes a detailed sensitivity analysis and hyperparameter estimation for each method, providing valuable insights into developing robust and transparent Prognostics and Health Management systems. These systems are crucial in supporting the heating industry’s move towards more sustainable and emission-free operations.
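The feature-engineering-based methodology described above can be illustrated with a minimal sketch. All names, the degradation signal, and the data are illustrative assumptions, not the paper's actual setup: each "batch" is a trajectory of resistance readings that is collapsed into a fixed-length vector of summary statistics, after which a simple one-feature least-squares fit stands in for the random forest regression used in the study.

```python
# Hedged sketch of a feature-oriented RUL pipeline: summarise each full
# sensor trajectory with a few statistics, then regress RUL on those
# features. Ordinary least squares on the mean is used here purely as a
# lightweight stand-in for random forest regression.
import statistics


def extract_features(batch):
    """Collapse a full trajectory into a fixed-length feature vector."""
    return [statistics.mean(batch), statistics.pstdev(batch), max(batch)]


def fit_rul_model(feature_rows, rul_targets):
    """Fit RUL ~ w0 + w1 * mean by ordinary least squares.

    A production system would fit e.g. a random forest or support
    vector regressor on the full feature matrix instead.
    """
    xs = [row[0] for row in feature_rows]
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(rul_targets) / n
    w1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, rul_targets))
    w1 /= sum((x - x_bar) ** 2 for x in xs)
    w0 = y_bar - w1 * x_bar
    return lambda row: w0 + w1 * row[0]


# Toy data: resistance drifts upward as the wire degrades, so higher
# mean resistance corresponds to lower remaining useful life.
batches = [[1.0, 1.1, 1.2], [2.0, 2.1, 2.2], [3.0, 3.1, 3.2]]
ruls = [30.0, 20.0, 10.0]

features = [extract_features(b) for b in batches]
predict_rul = fit_rul_model(features, ruls)
```

The trajectory-oriented alternative would instead feed the raw time series directly into a recurrent network, trading this hand-crafted feature step for learned representations.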

The findings reveal that feature-oriented methods are both performant and robust, particularly excelling in handling outliers. The random forest regression model, in particular, demonstrated the highest performance on the test dataset according to the chosen evaluation metrics. Conversely, trajectory-oriented methods exhibited less bias across varying levels of degradation, a helpful characteristic for Prognostics and Health Management systems. While feature-oriented methods tend to systematically underestimate Remaining Useful Life at high true values and overestimate it at low ones, this issue is less pronounced in trajectory-oriented models. Overall, these insights highlight the strengths and limitations of each approach, guiding the development of more effective and reliable predictive maintenance strategies.

How to Cite

Mählkvist, S., Söderkvist Vermelin, W., Helander, T., & Kyprianidis, K. (2024). Comparing Feature and Trajectory-Based Remaining Useful Life Modeling of Electrical Resistance Heating Wires. Annual Conference of the PHM Society, 16(1). https://doi.org/10.36001/phmconf.2024.v16i1.3913


Keywords

remaining useful life, machine learning, industrial heating, electrical resistance heating wires, interpretability, industrial batch processes

Section
Technical Research Papers