Adapting synthetic training data in deep learning-based visual surface inspection to improve transferability of simulations to real-world environments
Publication Type
Conference Paper
Publication Date
2023-06-28
Language
English
Citation
Automated Visual Inspection and Machine Vision V (2023)
Abstract
Learning models from synthetic image data rendered from 3D models and applying them to real-world applications can reduce costs and improve performance when using deep learning for image processing in automated visual inspection tasks. However, generalizing from synthetic to real-world data is challenging, because synthetic samples only approximate the inherent structure of real-world images and lack image properties present in real-world data, a phenomenon known as the domain gap. In this work, we propose to combine synthetic data generation approaches with CycleGAN, a style transfer method based on Generative Adversarial Networks (GANs). CycleGAN learns the inherent structure of real-world samples and adapts the synthetic data accordingly. We investigate how synthetic data can be adapted for a use case of visual inspection of automotive cast iron parts and show that supervised deep object detectors trained on the adapted data successfully generalize to real-world data and outperform object detectors trained on synthetic data alone. This demonstrates that generative domain adaptation helps to leverage synthetic data in deep learning-assisted inspection systems for automated visual inspection tasks.
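To illustrate the adaptation step described in the abstract, the following is a minimal PyTorch sketch of a CycleGAN-style setup that translates synthetic renderings toward real-world appearance. The network widths, the LSGAN objective, and the loss weight lambda_cyc are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of CycleGAN-style adaptation of synthetic images (assumption:
# PyTorch; sizes and loss formulation are illustrative, not the paper's setup).
import torch
import torch.nn as nn


class ResnetBlock(nn.Module):
    """Residual block used inside the generator."""
    def __init__(self, ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)


class Generator(nn.Module):
    """Image-to-image generator, e.g. synthetic renderings -> real-world style."""
    def __init__(self, ch: int = 64, n_blocks: int = 6):
        super().__init__()
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, ch, 7),
                  nn.InstanceNorm2d(ch), nn.ReLU(True),
                  # downsample
                  nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                  nn.InstanceNorm2d(ch * 2), nn.ReLU(True)]
        layers += [ResnetBlock(ch * 2) for _ in range(n_blocks)]
        layers += [  # upsample back to input resolution
            nn.ConvTranspose2d(ch * 2, ch, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.ReflectionPad2d(3), nn.Conv2d(ch, 3, 7), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style discriminator scoring how realistic local patches look."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(ch * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# G_s2r adapts synthetic renderings to real-world appearance; G_r2s maps the
# other way so a cycle-consistency loss can be enforced on unpaired data.
G_s2r, G_r2s = Generator(), Generator()
D_real, D_syn = Discriminator(), Discriminator()


def generator_loss(synthetic: torch.Tensor, real: torch.Tensor,
                   lambda_cyc: float = 10.0) -> torch.Tensor:
    """Simplified generator objective: LSGAN adversarial term + cycle consistency."""
    fake_real = G_s2r(synthetic)   # synthetic image rendered in real-world style
    fake_syn = G_r2s(real)         # real image mapped to the synthetic domain
    adv = ((D_real(fake_real) - 1) ** 2).mean() + ((D_syn(fake_syn) - 1) ** 2).mean()
    cyc = (G_r2s(fake_real) - synthetic).abs().mean() + (G_s2r(fake_syn) - real).abs().mean()
    return adv + lambda_cyc * cyc
```

Because the style transfer changes only image appearance and not geometry, the annotations generated with the synthetic renderings remain valid for the adapted images, which is what allows a supervised object detector to be trained on the adapted data.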
Keywords
automated visual inspection
borescope
deep learning
domain adaptation
industrial endoscope inspection
non-destructive testing
surface inspection
synthetic training data
DDC Class
624: Civil Engineering, Environmental Engineering