TUHH Open Research

Two-path 3D CNNs for calibration of system parameters for OCT-based motion compensation

Publication Type
Conference Paper
Date Issued
2019
Language
English
Author(s)
Gessert, Nils Thorben  
Gromniak, Martin  
Schlüter, Matthias  
Schlaefer, Alexander  
Institute
Medizintechnische Systeme E-1  
TORE-URI
http://hdl.handle.net/11420/3082
First published in
Progress in Biomedical Optics and Imaging - Proceedings of SPIE  
Number in series
10951
Article Number
1095108
Citation
Progress in Biomedical Optics and Imaging - Proceedings of SPIE (10951): 1095108 (2019)
Contribution to Conference
SPIE Medical Imaging, 2019  
Publisher DOI
10.1117/12.2512823
Scopus ID
2-s2.0-85068911626
Automatic motion compensation and adjustment of an intraoperative imaging modality's field of view is a common problem during interventions. Optical coherence tomography (OCT) is an imaging modality used in interventions due to its high spatial resolution of a few micrometers and its temporal resolution of potentially several hundred volumes per second. However, performing motion compensation with OCT is problematic due to its small field of view, which can cause tracked objects to be lost quickly. We propose a novel deep learning-based approach that directly learns, from optical coherence tomography volumes, the input parameters of the motors that move the scan area for motion compensation. We design a two-path 3D convolutional neural network (CNN) architecture that takes two volumes containing an object to be tracked as its input and predicts the motor input parameters necessary to compensate for the object's movement. In this way, we learn the calibration between object movement and system parameters for motion compensation with arbitrary objects. Thus, we avoid the error-prone hand-eye calibration and handcrafted feature tracking of classical approaches. We achieve an average correlation coefficient of 0.998 between predicted and ground-truth motor parameters, which leads to sub-voxel accuracy. Furthermore, we show that our deep learning model is real-time capable for use with the system's high volume acquisition frequency.
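The two-path architecture described in the abstract can be sketched as follows: each path extracts features from one OCT volume with 3D convolutions, the two feature vectors are concatenated, and a regression head predicts the motor input parameters. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the channel counts, kernel sizes, input volume size, and the number of motor parameters (here 3) are assumptions not specified in this record.

```python
import torch
import torch.nn as nn


class TwoPathCNN(nn.Module):
    """Sketch of a two-path 3D CNN for motor-parameter regression.

    All layer dimensions are illustrative assumptions; the paper's
    actual architecture may differ.
    """

    def __init__(self, n_motor_params: int = 3):
        super().__init__()

        def make_path() -> nn.Sequential:
            # One feature-extraction path: two strided 3D convolutions
            # followed by global average pooling to a fixed-size vector.
            return nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
                nn.Flatten(),  # -> (batch, 16)
            )

        # Separate weights per path (one could also share them).
        self.path_a = make_path()
        self.path_b = make_path()
        # Regression head on the concatenated features.
        self.head = nn.Sequential(
            nn.Linear(32, 32),
            nn.ReLU(),
            nn.Linear(32, n_motor_params),
        )

    def forward(self, vol_a: torch.Tensor, vol_b: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.path_a(vol_a), self.path_b(vol_b)], dim=1)
        return self.head(feats)


model = TwoPathCNN()
# Two dummy single-channel OCT volumes; 32^3 voxels is an assumed size.
vol_a = torch.randn(1, 1, 32, 32, 32)
vol_b = torch.randn(1, 1, 32, 32, 32)
motor_params = model(vol_a, vol_b)  # shape (1, 3)
```

Feeding a volume pair (e.g. before and after object movement) yields one predicted parameter vector per pair; training such a model against recorded ground-truth motor commands would replace an explicit hand-eye calibration.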