Automating Visual Inspection with Convolutional Neural Networks
Abstract
Convolutional Neural Networks (CNNs) have recently become the tool of choice for many visual detection tasks, including object classification, localization, detection, and segmentation. CNNs are specialized neural networks composed of many layers and designed specifically to analyze grid-like data, e.g. images. One of the key features of a CNN is its ability to automatically detect important features within an image (e.g. edges, patterns, shapes); prior to CNNs, these features had to be manually engineered by subject matter experts.
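To make the contrast with hand-engineered features concrete, the sketch below applies a classic Sobel edge filter, the kind of feature detector experts once designed by hand and that a CNN's early layers now learn automatically from data. The image and the plain-loop convolution are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

# Hand-engineered edge feature of the kind a CNN's early layers learn
# automatically: a 3x3 Sobel kernel for horizontal intensity gradients.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d_valid(img, k):
    """'Valid' 2D convolution: slide the kernel over every full window."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Toy image with a vertical step edge starting at column 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = convolve2d_valid(img, sobel_x)
# The filter responds only where the intensity changes (near column 4)
# and is silent in the flat regions.
```

A trained CNN stacks many such filters and learns their weights by gradient descent instead of fixing them by hand.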
Inspired by the significant achievements of CNNs in the domain of computer vision, we examine a specific CNN architecture, U-Net, suited to the task of visual defect detection. We identify and discuss situations for the use of this architecture in the specific context of external defect detection on aircraft and experimentally evaluate its performance across a dataset of common visual defects.
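U-Net is an encoder-decoder segmentation network: the encoder halves the spatial resolution while deepening the channels, the decoder upsamples back, and skip connections carry fine detail from encoder to decoder. The NumPy sketch below shows only this data flow; the `conv_like` stand-in (random untrained weights) replaces the learned convolution blocks so the example runs without a deep-learning framework, and the channel widths are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_like(x, out_ch):
    # Stand-in for a learned 3x3 convolution block: a random linear mix
    # over channels followed by ReLU, so the sketch runs framework-free.
    w = rng.normal(0, 0.1, (x.shape[0], out_ch))
    return np.maximum(np.einsum('chw,co->ohw', x, w), 0)

def down(x):
    # 2x2 max pooling halves the spatial resolution
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up(x):
    # Nearest-neighbour 2x upsampling restores resolution
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_forward(img):
    """Two-level U-Net data flow: contract, expand, concatenate skips."""
    e1 = conv_like(img, 8)           # encoder, full resolution
    e2 = conv_like(down(e1), 16)     # encoder, half resolution
    b = conv_like(down(e2), 32)      # bottleneck, quarter resolution
    d2 = conv_like(np.concatenate([up(b), e2], axis=0), 16)   # skip e2
    d1 = conv_like(np.concatenate([up(d2), e1], axis=0), 8)   # skip e1
    return conv_like(d1, 1)          # 1-channel per-pixel defect map

mask = unet_forward(rng.random((3, 64, 64)))
```

The key property for defect detection is that the output mask has the same spatial size as the input, giving a per-pixel defect prediction.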
One requirement of training convolutional networks on an image analysis task is the need for a large image (training) data set. We address this problem by using synthetically generated images from computer models of jets, rendered at varying angles and perspectives, with and without induced faults. This paper presents the initial results of using CNNs, specifically U-Net, to detect aerial vehicle surface defects of three categories. We further demonstrate that CNNs trained on synthetic images can then be used to detect faults in real images of jets with visual damage. The results obtained in this research indicate that our approach has been quite effective in detecting surface anomalies in our tests.
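A practical advantage of synthetic training data is that pixel-accurate ground-truth masks come for free: the fault is induced programmatically, so its location is known exactly. The sketch below illustrates the idea with a toy generator that paints a rectangular "defect" onto a flat surface texture; the rendering details (grey background, rectangular patch) are illustrative assumptions and do not reflect the paper's actual 3D jet models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_sample(size=64, defect=True):
    """Return a toy surface image and its exact ground-truth defect mask."""
    # Flat grey 'surface' with mild sensor-like noise
    img = np.full((size, size), 0.5) + rng.normal(0, 0.02, (size, size))
    mask = np.zeros((size, size), dtype=np.uint8)
    if defect:
        # Paint one rectangular darker patch at a random location;
        # because we placed it, the pixel-level label is known exactly.
        h, w = rng.integers(4, 12, size=2)
        y = rng.integers(0, size - h)
        x = rng.integers(0, size - w)
        img[y:y + h, x:x + w] -= 0.3
        mask[y:y + h, x:x + w] = 1
    return img.clip(0, 1), mask

img, mask = make_synthetic_sample()
```

Pairs like `(img, mask)` can then be fed directly to a segmentation network, sidestepping the cost of manually annotating real photographs.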
Keywords: Convolutional Neural Networks, Defect Detection, Semantic Segmentation, U-Net
This work is licensed under a Creative Commons Attribution 3.0 Unported License.