Integrated Multiple-Defect Detection and Evaluation of Rail Wheel Tread Images using Convolutional Neural Networks

Published May 25, 2021
Alexandre Trilla, John Bob-Manuel, Benjamin Lamoureux, Xavier Vilasis-Cardona

Abstract

The wheel-rail interface is regarded as the most important factor in the dynamic behaviour of a railway vehicle, affecting the safety of the service, passenger comfort, and the life of the wheelset asset. The degradation of the wheels in contact with the rail is visibly manifest on their treads in the form of defects such as indentations, flats, and cavities. To guarantee a reliable rail service and maximise the availability of the rolling-stock assets, these defects need to be monitored constantly and periodically as their severity evolves. This inspection task is usually conducted manually at the fleet level and therefore requires substantial human resources. To add value to this maintenance activity, this article presents an automatic Deep Learning method to jointly detect and classify wheel tread defects based on smartphone pictures taken by the maintenance team. The architecture of this approach is based on a framework of Convolutional Neural Networks applied to the different tasks of the diagnosis process: locating the defect area within the image, predicting the defect size, and identifying the defect type. With this information determined, the maintenance criteria rules can ultimately be applied to obtain actionable results. The presented neural approach has been evaluated on a set of wheel defect pictures collected over the course of nearly two years, concluding that it can reliably automate the condition diagnosis of half the current workload and thus reduce the lead time to take maintenance action, significantly reducing the engineering hours required for verification and validation. Overall, this creates a platform for significant progress in the automated predictive maintenance of rolling-stock wheelsets.
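The diagnosis pipeline summarised above lends itself to a multi-task formulation. The sketch below is a minimal illustration only, not the authors' implementation: it assumes a hypothetical WheelTreadDiagnosisNet with a small shared convolutional backbone and three heads (defect localisation, size regression, type classification), and a placeholder maintenance_action rule whose DEFECT_TYPES label set and 60 mm threshold are invented for the example.

```python
# Minimal illustrative sketch (not the authors' implementation): a shared CNN
# backbone feeding three task heads, mirroring the diagnosis steps described
# in the abstract (defect localisation, size estimation, type classification),
# followed by a placeholder rule mapping the outputs to a maintenance action.
import torch
import torch.nn as nn

DEFECT_TYPES = ["flat", "indentation", "cavity"]  # hypothetical label set

class WheelTreadDiagnosisNet(nn.Module):
    def __init__(self, num_types: int = len(DEFECT_TYPES)):
        super().__init__()
        # Small convolutional backbone; in practice a pretrained network would be used.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Task-specific heads sharing the same image representation.
        self.locate = nn.Linear(32, 4)             # bounding box (x, y, w, h), normalised
        self.size = nn.Linear(32, 1)               # defect size regression (e.g. mm)
        self.classify = nn.Linear(32, num_types)   # defect type logits

    def forward(self, image: torch.Tensor):
        features = self.backbone(image)
        return {
            "bbox": torch.sigmoid(self.locate(features)),
            "size_mm": self.size(features).squeeze(-1),
            "type_logits": self.classify(features),
        }

def maintenance_action(size_mm: float, defect_type: str) -> str:
    """Placeholder maintenance-criteria rule; real thresholds come from the
    operator's wheelset maintenance standard, not from this sketch."""
    if defect_type == "flat" and size_mm > 60.0:
        return "remove wheelset for reprofiling"
    return "continue monitoring"

if __name__ == "__main__":
    model = WheelTreadDiagnosisNet()
    picture = torch.rand(1, 3, 224, 224)           # stand-in for a smartphone photo
    out = model(picture)
    defect = DEFECT_TYPES[out["type_logits"].argmax(dim=1).item()]
    print(defect, maintenance_action(out["size_mm"].item(), defect))
```

The design point illustrated is that a single shared representation can feed all three diagnosis outputs, so the rule-based maintenance decision sits outside the network and can be updated independently of retraining.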

Keywords

deep learning, convolutional neural network, wheel tread, railway, image

Section
Technical Papers