DC Field | Value | Language
dc.contributor.author | Gessert, Nils Thorben | -
dc.contributor.author | Gromniak, Martin | -
dc.contributor.author | Schlüter, Matthias | -
dc.contributor.author | Schlaefer, Alexander | -
dc.date.accessioned | 2019-08-09T11:35:58Z | -
dc.date.available | 2019-08-09T11:35:58Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Progress in Biomedical Optics and Imaging - Proceedings of SPIE (10951): 1095108 (2019) | de_DE
dc.identifier.isbn | 978-151062549-5 | de_DE
dc.identifier.issn | 1605-7422 | de_DE
dc.identifier.uri | http://hdl.handle.net/11420/3082 | -
dc.description.abstract | Automatic motion compensation and adjustment of an intraoperative imaging modality's field of view is a common problem during interventions. Optical coherence tomography (OCT) is an imaging modality used in interventions due to its high spatial resolution of a few micrometers and its temporal resolution of potentially several hundred volumes per second. However, performing motion compensation with OCT is problematic due to its small field of view, which can quickly lead to tracked objects being lost. We propose a novel deep learning-based approach that directly learns the input parameters of the motors that move the scan area for motion compensation from optical coherence tomography volumes. We design a two-path 3D convolutional neural network (CNN) architecture that takes two volumes containing an object to be tracked as its input and predicts the motor input parameters necessary to compensate for the object's movement. In this way, we learn the calibration between object movement and system parameters for motion compensation with arbitrary objects. Thus, we avoid the error-prone hand-eye calibration and handcrafted feature tracking of classical approaches. We achieve an average correlation coefficient of 0.998 between predicted and ground-truth motor parameters, which leads to sub-voxel accuracy. Furthermore, we show that our deep learning model is real-time capable for use with the system's high volume acquisition frequency. | en
dc.language.iso | en | de_DE
dc.relation.ispartof | Progress in Biomedical Optics and Imaging - Proceedings of SPIE | de_DE
dc.title | Two-path 3D CNNs for calibration of system parameters for OCT-based motion compensation | de_DE
dc.type | inProceedings | de_DE
dc.type.dini | contributionToPeriodical | -
dcterms.DCMIType | Text | -
tuhh.abstract.english | Automatic motion compensation and adjustment of an intraoperative imaging modality's field of view is a common problem during interventions. Optical coherence tomography (OCT) is an imaging modality used in interventions due to its high spatial resolution of a few micrometers and its temporal resolution of potentially several hundred volumes per second. However, performing motion compensation with OCT is problematic due to its small field of view, which can quickly lead to tracked objects being lost. We propose a novel deep learning-based approach that directly learns the input parameters of the motors that move the scan area for motion compensation from optical coherence tomography volumes. We design a two-path 3D convolutional neural network (CNN) architecture that takes two volumes containing an object to be tracked as its input and predicts the motor input parameters necessary to compensate for the object's movement. In this way, we learn the calibration between object movement and system parameters for motion compensation with arbitrary objects. Thus, we avoid the error-prone hand-eye calibration and handcrafted feature tracking of classical approaches. We achieve an average correlation coefficient of 0.998 between predicted and ground-truth motor parameters, which leads to sub-voxel accuracy. Furthermore, we show that our deep learning model is real-time capable for use with the system's high volume acquisition frequency. | de_DE
tuhh.publisher.doi | 10.1117/12.2512823 | -
tuhh.publication.institute | Medizintechnische Systeme E-1 | de_DE
tuhh.type.opus | InProceedings (Aufsatz / Paper einer Konferenz etc.) | -
tuhh.institute.german | Medizintechnische Systeme E-1 | de
tuhh.institute.english | Medizintechnische Systeme E-1 | de_DE
tuhh.gvk.hasppn | false | -
dc.type.driver | contributionToPeriodical | -
dc.type.casrai | Conference Paper | -
tuhh.container.volume | 10951 | de_DE
tuhh.container.articlenumber | 1095108 | de_DE
item.languageiso639-1 | en | -
item.fulltext | No Fulltext | -
item.openairetype | inProceedings | -
item.grantfulltext | none | -
item.creatorOrcid | Gessert, Nils Thorben | -
item.creatorOrcid | Gromniak, Martin | -
item.creatorOrcid | Schlüter, Matthias | -
item.creatorOrcid | Schlaefer, Alexander | -
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | -
item.creatorGND | Gessert, Nils Thorben | -
item.creatorGND | Gromniak, Martin | -
item.creatorGND | Schlüter, Matthias | -
item.creatorGND | Schlaefer, Alexander | -
item.cerifentitytype | Publications | -
crisitem.author.dept | Medizintechnische Systeme E-1 | -
crisitem.author.dept | Medizintechnische Systeme E-1 | -
crisitem.author.dept | Medizintechnische Systeme E-1 | -
crisitem.author.dept | Medizintechnische Systeme E-1 | -
crisitem.author.orcid | 0000-0001-6325-5092 | -
crisitem.author.orcid | 0000-0002-2019-1102 | -
crisitem.author.parentorg | Studiendekanat Elektrotechnik, Informatik und Mathematik | -
crisitem.author.parentorg | Studiendekanat Elektrotechnik, Informatik und Mathematik | -
crisitem.author.parentorg | Studiendekanat Elektrotechnik, Informatik und Mathematik | -
crisitem.author.parentorg | Studiendekanat Elektrotechnik, Informatik und Mathematik | -
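The abstract above describes a two-path 3D CNN that takes a reference volume and a moved volume and regresses the motor input parameters needed to re-center the scan area. A minimal NumPy sketch of that data flow is shown below. It is purely illustrative: the single shared convolution kernel, the lazily initialized linear head, the volume size, and the number of motor parameters (`n_params=2`) are all assumptions for the sketch, not the architecture or weights from the paper.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive valid-mode 3D cross-correlation (illustrative, not optimized)."""
    kd, kh, kw = kernel.shape
    d, h, w = vol.shape
    out = np.empty((d - kd + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i + kd, j:j + kh, k:k + kw] * kernel)
    return out

class TwoPathSketch:
    """Two OCT volumes -> shared 3D conv path each -> concat -> linear head."""

    def __init__(self, kernel_size=3, n_params=2, seed=0):
        self.rng = np.random.default_rng(seed)
        # One shared (weight-tied) kernel stands in for the conv feature extractor.
        self.kernel = self.rng.standard_normal((kernel_size,) * 3) * 0.1
        self.n_params = n_params
        self.w = None  # linear regression head, lazily sized to the feature vector

    def forward(self, vol_ref, vol_moved):
        # Each path: 3D convolution followed by a ReLU nonlinearity, then flatten.
        f_ref = np.maximum(conv3d_valid(vol_ref, self.kernel), 0).ravel()
        f_mov = np.maximum(conv3d_valid(vol_moved, self.kernel), 0).ravel()
        # Concatenate both paths and regress the motor input parameters.
        feat = np.concatenate([f_ref, f_mov])
        if self.w is None:
            self.w = self.rng.standard_normal((self.n_params, feat.size)) * 0.01
        return self.w @ feat  # predicted motor parameters (untrained here)
```

The point of the sketch is the calibration-free formulation: because the network maps volume pairs directly to motor commands, the hand-eye mapping between image space and actuator space is absorbed into the learned weights rather than estimated separately.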
Appears in Collections: Publications without fulltext

Items in TORE are protected by copyright, with all rights reserved, unless otherwise indicated.