Title: Adapting synthetic training data in deep learning-based visual surface inspection to improve transferability of simulations to real-world environments

Authors: Schmedemann, Ole; Schlodinski, Simon; Holst, Dirk; Schüppstuhl, Thorsten

Published in: Automated Visual Inspection and Machine Vision V (2023)

Dates: 2023-09-05; 2023-06-28

Handle: https://hdl.handle.net/11420/43159

DOI: 10.1117/12.2673857

Type: Conference Paper

Language: en

Keywords: automated visual inspection; borescope; deep learning; domain adaptation; industrial endoscope inspection; non-destructive testing; surface inspection; synthetic training data; Civil Engineering, Environmental Engineering

Abstract: Learning models from synthetic image data rendered from 3D models and applying them to real-world applications can reduce costs and improve performance when using deep learning for image processing in automated visual inspection tasks. However, achieving sufficient generalization from synthetic to real-world data is challenging, because synthetic samples only approximate the inherent structure of real-world images and lack image properties present in real-world data, a phenomenon known as the domain gap. In this work, we propose to combine synthetic generation approaches with CycleGAN, a style transfer method based on Generative Adversarial Networks (GANs). CycleGAN learns the inherent structure of real-world samples and adapts the synthetic data accordingly. We investigate how synthetic data can be adapted for a use case of visual inspection of automotive cast iron parts and show that supervised deep object detectors trained on the adapted data successfully generalize to real-world data and outperform object detectors trained on synthetic data alone. This demonstrates that generative domain adaptation helps to leverage synthetic data in deep learning-assisted inspection systems for automated visual inspection tasks.
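
The abstract's core mechanism, CycleGAN-style unpaired translation between a synthetic and a real domain, hinges on the cycle-consistency loss that keeps the two generators approximate inverses of each other. The sketch below illustrates only that loss term with NumPy and toy stand-in generators; the functions `G` and `F` are hypothetical placeholders for the trained synthetic-to-real and real-to-synthetic networks, not the paper's actual models.

```python
import numpy as np

def cycle_consistency_loss(G, F, x_synth, y_real):
    """L1 cycle-consistency term used in CycleGAN:
    mean |F(G(x)) - x| + mean |G(F(y)) - y|,
    encouraging G and F to be approximate inverses."""
    loss_forward = np.mean(np.abs(F(G(x_synth)) - x_synth))  # synth -> real -> synth
    loss_backward = np.mean(np.abs(G(F(y_real)) - y_real))   # real -> synth -> real
    return loss_forward + loss_backward

# Hypothetical stand-in "generators" (trained CNNs in the real pipeline):
G = lambda img: img + 0.1  # maps synthetic-style images toward real style
F = lambda img: img - 0.1  # maps real-style images back toward synthetic style

x = np.zeros((2, 8, 8, 3))  # batch of rendered (synthetic) images
y = np.ones((2, 8, 8, 3))   # batch of real inspection images
loss = cycle_consistency_loss(G, F, x, y)
```

Because the toy `G` and `F` are exact inverses here, the loss is (numerically) zero; with real networks this term is minimized jointly with the adversarial losses so that adapted synthetic images keep their content, and hence their object-detection labels, while taking on real-world appearance.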