Monocular Fisheye Depth Estimation for UAV Applications with Segmentation Feature Integration
Publication Type
Conference Paper
Date Issued
2024-11-15
Language
English
Author(s)
Citation
43rd AIAA DATC/IEEE Digital Avionics Systems Conference (DASC 2024)
Contribution to Conference
Publisher DOI
Scopus ID
Publisher
IEEE
ISBN
9798350349610
Abstract
Scene understanding is crucial for a UAV to carry out operations autonomously. Given the limited memory and computational resources available on a UAV platform, running several deep learning networks simultaneously may not be feasible. In such scenarios, combining related architectures, such as depth and segmentation networks, can help not only to reduce the memory footprint but also to increase inference speed. One novel application addressed in this paper is the use of fisheye cameras, which are particularly beneficial for UAVs because of their large field-of-view coverage compared to conventional perspective cameras, and which are also lighter than sensors such as RADAR and LiDAR. This paper proposes a joint architecture that combines a monocular depth estimation network with a segmentation network for fisheye camera images. Specifically, we focus on integrating segmentation features into the decoder of the depth estimation network to improve depth predictions, designing a lightweight fusion module that uses a 1 × 1 convolution and a CBAM module to refine the fused feature map. Furthermore, we demonstrate the effectiveness of this joint architecture on the AirFisheye dataset. The source code and the pre-trained model are available at https://github.com/pravinjaisawal/Adabins-SFI.
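The following is a minimal PyTorch sketch of the kind of fusion block the abstract describes: segmentation and depth-decoder features are concatenated, projected with a 1 × 1 convolution, and refined with CBAM-style channel and spatial attention. The class names, channel widths, and exact wiring are illustrative assumptions and are not taken from the Adabins-SFI repository.

```python
# Illustrative sketch of a segmentation-feature fusion module:
# concatenate depth and segmentation features, project with a 1x1
# convolution, then refine with CBAM (channel + spatial attention).
# All names and channel sizes below are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention branch of CBAM."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Spatial attention branch of CBAM."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


class SegFeatureFusion(nn.Module):
    """Fuse segmentation features into a depth-decoder feature map."""

    def __init__(self, depth_channels: int, seg_channels: int):
        super().__init__()
        # The 1x1 convolution projects the concatenated features back to
        # the depth-decoder channel width, keeping the module lightweight.
        self.project = nn.Conv2d(depth_channels + seg_channels,
                                 depth_channels, kernel_size=1)
        self.channel_attn = ChannelAttention(depth_channels)
        self.spatial_attn = SpatialAttention()

    def forward(self, depth_feat: torch.Tensor,
                seg_feat: torch.Tensor) -> torch.Tensor:
        fused = self.project(torch.cat([depth_feat, seg_feat], dim=1))
        fused = self.channel_attn(fused)
        fused = self.spatial_attn(fused)
        return fused


if __name__ == "__main__":
    # Example: fuse a 128-channel depth feature with a 64-channel
    # segmentation feature at the same spatial resolution.
    fusion = SegFeatureFusion(depth_channels=128, seg_channels=64)
    d = torch.randn(1, 128, 32, 32)
    s = torch.randn(1, 64, 32, 32)
    print(fusion(d, s).shape)  # torch.Size([1, 128, 32, 32])
```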
Subjects
Depth estimation | Feature integration | Fisheye | Joint architectures | UAV
DDC Class
629.13: Aviation Engineering