A Deep Learning Solution for Quality Control in a Die Casting Process
Anibal Bregon
Carlos J. Alonso-González
Daniel López
Miguel A. Martínez-Prieto
Belarmino Pulido
Abstract
Industry 4.0 aims for a digital transformation of manufacturing and production systems, producing what are known as smart factories, where information coming from Cyber-Physical Systems (core elements of Industry 4.0) is used at every manufacturing stage to improve productivity. Through their control and sensor systems, cyber-physical systems provide a global view of the process and generate large amounts of data that can be used, for instance, to build data-driven models of the processes. However, having data is not enough: we must be able to store, visualize, and analyze them, and to integrate the induced knowledge into the whole production process. In this work, we present a solution to automate the quality control of manufactured parts through image analysis. In particular, we present a Deep Learning solution that detects defects in manufactured parts from thermographic images of a die casting machine at an aluminum foundry.
Keywords: Deep Learning, Quality Control, Industry 4.0
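The modeling details appear in the full paper; purely as an illustrative sketch of the kind of pipeline the abstract describes, the following Python/PyTorch snippet trains a small convolutional network to label single-channel thermographic images as defective or non-defective. Every name, layer size, and hyperparameter below is an assumption made for illustration, not the architecture actually used in this work.

    # Minimal sketch (all names and hyperparameters are illustrative
    # assumptions, not the authors' actual model): a small CNN that
    # classifies single-channel thermographic images as defect / no defect.
    import torch
    import torch.nn as nn

    class ThermalDefectNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                # 1 input channel: thermograms are single-band intensity maps
                nn.Conv2d(1, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),  # head is independent of image size
                nn.Flatten(),
                nn.Linear(32, 2),         # two classes: defective / non-defective
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = ThermalDefectNet()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step on a dummy batch of 64x64 thermal images.
    images = torch.randn(8, 1, 64, 64)   # stand-in for real thermographic data
    labels = torch.randint(0, 2, (8,))   # stand-in for defect annotations
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

One design note behind the single-channel input: feeding the raw thermogram directly, rather than a false-color RGB rendering, lets the network learn from the thermal signal itself instead of from an arbitrary colormap.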