Natural Language Processing for Risk, Resilience, and Reliability
Published Jun 27, 2024
Jean Meunier-Pion

Abstract

Natural Language Processing (NLP) has surged in recent years, especially with the introduction of transformer architectures built on the now famous self-attention mechanism. With the rise of Large Language Models (LLMs), propelled by the appearance of ChatGPT in 2022, a new hope of extracting relevant information from text has emerged. In the meantime, natural language data have rarely been used in risk, resilience, and reliability tasks. Yet text data containing reliability-related information, which can be used to monitor the health of complex systems, are available in diverse forms. Text data can contain theoretical expert knowledge (technical reports, documentation, Failure Modes and Effects Analysis (FMEA)), in-practice expert knowledge (incident reports, maintenance work orders), or in-practice non-expert knowledge (customer feedback, news articles). Critical infrastructures, such as nuclear power plants, railway networks, or electrical power grids, are complex systems whose failures would have severe consequences affecting many people. Such systems have the advantage of serving many users, and therefore of offering many text sources from which technical information and past incident data can be mined to anticipate future failures and generate responses to catastrophic scenarios. The goal of this work is to develop methods and apply state-of-the-art NLP techniques to text data relating to critical infrastructures and failures, in order to (1) mine information from unstructured language data and (2) structure the extracted information. Preliminary experiments conducted on customer review data and incident reports show promising performance for failure detection from text with transformers, as well as for incident-related information extraction with LLMs.
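
To illustrate the kind of failure-detection step described above, the sketch below classifies free-text customer reviews as failure-related or not with a transformer model from the Hugging Face `transformers` library. It is a minimal, hypothetical example, not the author's actual pipeline: the zero-shot model, the label names, and the sample texts are all assumptions, and a classifier fine-tuned on labelled reliability data would replace the zero-shot model in practice.

```python
# Minimal sketch: flagging failure-related text with a transformer classifier.
# Model name, labels, and example reviews are illustrative assumptions.
from transformers import pipeline

# Zero-shot classification avoids needing a labelled training set;
# a fine-tuned sequence-classification model would be used in a real study.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

reviews = [
    "The pump started leaking after two weeks and then shut down completely.",
    "Delivery was fast and the device works exactly as described.",
]

for text in reviews:
    result = classifier(text, candidate_labels=["failure report", "no failure"])
    # result["labels"] is sorted by score, so the first label is the prediction.
    print(f"{result['labels'][0]:>14} | {text}")
```

The same pattern extends to the second step mentioned in the abstract: texts flagged as failure-related could then be passed to an LLM prompt that extracts structured fields (component, failure mode, consequence) for a knowledge base.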

How to Cite

Meunier-Pion, J. (2024). Natural Language Processing for Risk, Resilience, and Reliability. PHM Society European Conference, 8(1), 4. https://doi.org/10.36001/phme.2024.v8i1.3956

Keywords

Natural Language Processing, Large Language Models, Reliability, Critical infrastructures, Information extraction, Knowledge base

Section
Doctoral Symposium