TUHH Open Research
Adapting synthetic training data in deep learning-based visual surface inspection to improve transferability of simulations to real-world environments

Citation Link: https://doi.org/10.15480/882.14811
Publication Type
Conference Paper
Date Issued
2023
Language
English
Author(s)
Schmedemann, Ole (Flugzeug-Produktionstechnik M-23)
Schlodinski, Simon (Flugzeug-Produktionstechnik M-23)
Holst, Dirk (Flugzeug-Produktionstechnik M-23)
Schüppstuhl, Thorsten (Flugzeug-Produktionstechnik M-23)
TORE-DOI
10.15480/882.14811
TORE-URI
https://hdl.handle.net/11420/54432
First published in
Proceedings of SPIE  
Number in series
12623
Article Number
1262307
Citation
Proceedings of SPIE - The International Society for Optical Engineering 12623: 1262307 (2023)
Contribution to Conference
Automated Visual Inspection and Machine Vision V  
Publisher DOI
10.1117/12.2673857
Scopus ID
2-s2.0-85173552049
Publisher
The International Society for Optical Engineering
ISBN
9781510664555
Abstract
Learning models from synthetic image data rendered from 3D models and applying them to real-world applications can reduce costs and improve performance when using deep learning for image processing in automated visual inspection tasks. However, sufficient generalisation from synthetic to real-world data is challenging, because synthetic samples only approximate the inherent structure of real-world images and lack image properties present in real-world data, a phenomenon called domain gap. In this work, we propose to combine synthetic generation approaches with CycleGAN, a style transfer method based on Generative Adversarial Networks (GANs). CycleGAN learns the inherent structure from real-world samples and adapts the synthetic data accordingly. We investigate how synthetic data can be adapted for a use case of visual inspection of automotive cast iron parts, and show that supervised deep object detectors trained on the adapted data can successfully generalise to real-world data and outperform object detectors trained on synthetic data alone. This demonstrates that generative domain adaptation helps to leverage synthetic data in deep learning-assisted inspection systems for automated visual inspection tasks.
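The adaptation step the abstract describes can be pictured with a short sketch. The Python/PyTorch code below is a minimal illustration, not the authors' implementation: it assumes a CycleGAN generator already trained on unpaired synthetic and real inspection images, uses a simplified generator (one downsampling stage and six residual blocks, where the original CycleGAN uses two stages and nine blocks), and all directory paths and the checkpoint name g_syn2real.pth are hypothetical.

# Minimal sketch (not the authors' code) of the adaptation step described
# above: a CycleGAN generator, already trained on unpaired synthetic and
# real inspection images, restyles rendered images before detector training.
# The architecture is simplified; all paths and the checkpoint name are
# hypothetical.

import torch
import torch.nn as nn
from pathlib import Path
from PIL import Image
from torchvision import transforms


class ResidualBlock(nn.Module):
    """CycleGAN-style residual block with reflection padding."""

    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)


class Generator(nn.Module):
    """Simplified CycleGAN generator: encode, transform, decode."""

    def __init__(self, n_blocks: int = 6):
        super().__init__()
        layers = [
            nn.ReflectionPad2d(3),
            nn.Conv2d(3, 64, kernel_size=7),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            # one downsampling stage (the original CycleGAN uses two)
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(128),
            nn.ReLU(inplace=True),
        ]
        layers += [ResidualBlock(128) for _ in range(n_blocks)]
        layers += [
            nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(3),
            nn.Conv2d(64, 3, kernel_size=7),
            nn.Tanh(),  # outputs in [-1, 1]
        ]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)


def adapt_synthetic_images(src_dir: str, dst_dir: str, ckpt: str) -> None:
    """Restyle every rendered image in src_dir and save it to dst_dir.

    Because CycleGAN changes appearance rather than geometry, the original
    object-detection labels remain valid for the adapted images.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    gen = Generator().to(device).eval()
    gen.load_state_dict(torch.load(ckpt, map_location=device))

    preprocess = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # [0,1] -> [-1,1]
    ])
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    with torch.no_grad():
        for path in sorted(Path(src_dir).glob("*.png")):
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            fake_real = gen(x)  # synthetic -> real-world style
            out = (fake_real.squeeze(0) * 0.5 + 0.5).clamp(0, 1)  # back to [0,1]
            transforms.ToPILImage()(out.cpu()).save(Path(dst_dir) / path.name)


if __name__ == "__main__":
    # Hypothetical paths: rendered training images in, adapted images out.
    adapt_synthetic_images("renders/", "renders_adapted/", "g_syn2real.pth")

The adapted images would then replace the raw renders in the detector's training set, paired with the unchanged annotations from the rendering pipeline.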
Subjects
automated visual inspection | borescope | deep learning | domain adaptation | industrial endoscope inspection | non-destructive testing | surface inspection | synthetic training data
DDC Class
006.3: Artificial Intelligence
620.11: Engineering Materials
Publication version
publishedVersion
License
http://rightsstatements.org/vocab/InC/1.0/
Publisher's Creditline
Copyright 2023 Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited.
Name
2023_SPIE_Schmedemann.pdf
Size
5.91 MB
Format
Adobe PDF
