TUHH Open Research
Monocular Fisheye Depth Estimation for UAV Applications with Segmentation Feature Integration

Publication Type
Conference Paper
Date Issued
2024-11-15
Language
English
Author(s)
Jaisawal, Pravin Kumar 
Lufttransportsysteme M-28  
Papakonstantinou, Stephanos  
Lufttransportsysteme M-28  
Gollnick, Volker  
Lufttransportsysteme M-28  
TORE-URI
https://hdl.handle.net/11420/52703
Citation
43rd AIAA DATC/IEEE Digital Avionics Systems Conference, DASC 2024
Contribution to Conference
43rd AIAA DATC/IEEE Digital Avionics Systems Conference, DASC 2024  
Publisher DOI
10.1109/DASC62030.2024.10748676
Scopus ID
2-s2.0-85211229421
Publisher
IEEE
ISBN
9798350349610
Abstract
Scene understanding is crucial for a UAV to carry out operations autonomously. Given the limited memory and computational resources available on the UAV platform, running several deep learning networks simultaneously may not be feasible. In such scenarios, combining related architectures, such as depth and segmentation networks, could help not only to reduce the memory footprint but also to increase the inference speed. One novel application addressed in this paper is the use of fisheye cameras, which are particularly beneficial for UAVs because of their large field-of-view coverage compared to normal perspective cameras, and which are also lighter than sensors such as RADAR and LiDAR. This paper proposes a joint architecture that combines a monocular depth estimation network with a segmentation network for fisheye camera images. Specifically, we focus on integrating segmentation features into the decoder of the depth estimation network to improve depth predictions by designing a lightweight fusion module, which uses a 1 × 1 convolution and a CBAM module to refine the fused feature map. Furthermore, we show the effectiveness of this joint architecture on the AirFisheye dataset. The source code and the pre-trained model are available at https://github.com/pravinjaisawal/Adabins-SFI.
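
For orientation, the sketch below shows one plausible PyTorch realization of the kind of fusion module the abstract describes: segmentation decoder features concatenated with depth decoder features, reduced with a 1 × 1 convolution, and refined with a CBAM block (channel plus spatial attention). The module and parameter names (SegDepthFusion, depth_ch, seg_ch) are illustrative assumptions, not the authors' code; the actual implementation is in the linked Adabins-SFI repository.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to average- and max-pooled channel descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)      # re-weight channels
        return x * self.sa(x)   # re-weight spatial locations


class SegDepthFusion(nn.Module):
    """Illustrative fusion of segmentation features into a depth decoder stage."""

    def __init__(self, depth_ch: int, seg_ch: int):
        super().__init__()
        # 1x1 convolution projects the concatenated features back to depth_ch
        self.reduce = nn.Conv2d(depth_ch + seg_ch, depth_ch, kernel_size=1)
        self.cbam = CBAM(depth_ch)

    def forward(self, depth_feat, seg_feat):
        fused = torch.cat([depth_feat, seg_feat], dim=1)
        return self.cbam(self.reduce(fused))


if __name__ == "__main__":
    # Example: fuse a 128-channel depth feature map with a 64-channel
    # segmentation feature map at the same spatial resolution (assumed shapes).
    fusion = SegDepthFusion(depth_ch=128, seg_ch=64)
    d = torch.randn(1, 128, 32, 32)
    s = torch.randn(1, 64, 32, 32)
    print(fusion(d, s).shape)  # torch.Size([1, 128, 32, 32])

Because the 1 × 1 convolution and the attention weights add only a few parameters per decoder stage, a module of this shape keeps the memory and latency overhead small, which is consistent with the lightweight design goal stated in the abstract.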
Subjects
Depth estimation | Feature integration | Fisheye | Joint architectures | UAV
MLE@TUHH
DDC Class
629.13: Aviation Engineering