A monitoring method for detecting and localizing overheat, smoke and fire faults in wind turbine nacelle


Published Sep 4, 2023
Minsoo Lee Eunchan Do Ki-Yong Oh

Abstract

This study presents a monitoring method that uses 3D object classification to accurately detect the mechanical and electrical components of a wind turbine nacelle by combining a geometric and statistical feature extractor (GSFE) with a multi-view approach. After object detection, the method also performs outlier detection on fused optical/infrared and LiDAR measurements to localize overheat faults in these components. The proposed method has three key characteristics. First, the outlier detection separates normal and faulty clusters either with a 2D object classification/detection model or by measuring the standard deviation of temperature in the fused sensor measurements. Specifically, outlier detection on the fused measurements extracts position coordinates and temperature data to localize overheat faults, effectively pinpointing an overheating component. Second, the GSFE uses a group sampling approach to extract local geometric features from neighboring point clouds, aggregating normal vectors and standard deviations; this ensures high object classification accuracy. Third, the multi-view approach updates the local geometric and statistical features through a graph convolutional network, improving the accuracy and robustness of object classification. The proposed outlier detection is verified through overheat/fire field tests. The effectiveness of the proposed 3D object classification method is also validated on a virtual wind turbine nacelle CAD dataset and the public CAD dataset ModelNet40. Because gathering big data to train a neural network in this setting is extremely difficult, the ability to accurately detect critical components from only a few virtual datasets makes the proposed method practical and effective for monitoring fire and overheating components.
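The statistical branch of the outlier detection described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the fixed sigma threshold `k`, and the simple flat arrays of per-point coordinates and temperatures are assumptions made for illustration. The idea is only that, given fused LiDAR/infrared measurements, points whose temperature deviates far from the mean are flagged as faulty, and their centroid gives the fault location.

```python
import numpy as np

def localize_overheat(points, temps, k=3.0):
    """Flag fused LiDAR/IR points whose temperature is a statistical
    outlier and return the centroid of the flagged cluster.

    points : (N, 3) array-like of LiDAR point coordinates
    temps  : (N,) array-like of per-point infrared temperatures
    k      : number of standard deviations defining a hot outlier
             (illustrative choice, not from the paper)
    """
    temps = np.asarray(temps, dtype=float)
    mu, sigma = temps.mean(), temps.std()
    mask = temps > mu + k * sigma      # keep only abnormally hot points
    if not mask.any():
        return None                    # no overheat fault detected
    # centroid of the hot cluster approximates the fault position
    return np.asarray(points, dtype=float)[mask].mean(axis=0)
```

In practice the threshold would be tuned per component, and a clustering step could separate multiple simultaneous hot spots, but the sketch captures the core mechanism: extracting position coordinates and temperature jointly so that a temperature anomaly maps directly to a 3D location.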



Keywords

Outlier detection, Multi-view approach, Feature extractor, 3D object classification

Section
Regular Session Papers