Abstract

Within the aviation industry, there is considerable interest in minimizing maintenance expenses. In particular, the inspection of critical components such as aircraft engines is highly relevant. Currently, many inspection processes are still performed manually with hand-held endoscopes to detect coating damage in confined spaces and therefore require a high level of individual expertise. These manual inspections are susceptible to uncertainties, particularly because the video data are often poorly illuminated. This motivates automated defect detection, which provides defined, comparable results and also enables significant cost savings. For such a hand-held application with low-quality video data, small and fast Convolutional Neural Networks (CNNs) for the segmentation of coating damage are suitable and are examined further in this work. Due to the high effort required for image annotation and a significant lack of broadly divergent image data (domain gap), only a few expressive annotated images are available. This necessitates extensive training methods that utilize unsupervised domains and further exploit the sparsely annotated data. We propose novel training methods that employ Generative Adversarial Networks (GANs) to improve the training of segmentation networks, both by optimizing weights and by generating synthetic annotated RGB image data for further training procedures. For this, small individual encoder and decoder structures are designed to resemble the implemented structures of the GANs. This enables an exchange of weights and optimizer states from the GANs to the segmentation networks, which improves both convergence certainty and accuracy in training. Using unsupervised domains in GAN training leads to better generalization of the networks and tackles the challenges caused by the domain gap.
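The weight-exchange idea sketched above can be illustrated as follows. This is a minimal, framework-agnostic sketch (plain Python dicts stand in for state dictionaries, flat lists for weight tensors); the function name `transfer_matching_weights` and the prefix mapping are hypothetical illustrations, not the paper's actual implementation:

```python
def transfer_matching_weights(source_state, target_state, prefix_map=None):
    """Copy parameters from a source state dict (e.g. a trained GAN
    encoder) into a target state dict (e.g. a segmentation encoder)
    wherever the parameter names and shapes match.

    prefix_map optionally renames source prefixes to target prefixes,
    since the GAN and segmentation networks only resemble each other.
    Returns the list of target parameter names that were transferred.
    """
    prefix_map = prefix_map or {}
    transferred = []
    for src_name, weights in source_state.items():
        tgt_name = src_name
        for src_prefix, tgt_prefix in prefix_map.items():
            if src_name.startswith(src_prefix):
                tgt_name = tgt_prefix + src_name[len(src_prefix):]
                break
        # Only transfer when the target has a parameter of the same shape.
        if tgt_name in target_state and len(target_state[tgt_name]) == len(weights):
            target_state[tgt_name] = list(weights)
            transferred.append(tgt_name)
    return transferred


# Hypothetical state dicts: the GAN encoder seeds the segmentation
# backbone, while the segmentation head keeps its own initialization.
gan_encoder = {"enc.conv1.weight": [0.1, 0.2], "enc.conv2.weight": [0.3]}
seg_network = {"backbone.conv1.weight": [0.0, 0.0],
               "backbone.conv2.weight": [0.0],
               "head.conv.weight": [0.0]}

moved = transfer_matching_weights(gan_encoder, seg_network,
                                  prefix_map={"enc.": "backbone."})
```

In a deep-learning framework the same pattern would operate on real parameter tensors (and, as the abstract notes, on optimizer states as well), with the untransferred segmentation head trained from its own initialization.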
Furthermore, a test series is presented that demonstrates the impact of these methods compared with standard supervised training and transfer-learning methods based on common datasets. Finally, the developed CNNs are compared with larger state-of-the-art segmentation networks in terms of feed-forward computation time, accuracy, and training duration.
