UADA3D: Unsupervised Adversarial Domain Adaptation for 3D Object Detection With Sparse LiDAR and Large Domain Gaps
Citation Link: https://doi.org/10.15480/882.13686
Publication Type
Journal Article
Date Issued
2024-10-29
Language
English
Volume
9
Issue
12
Start Page
11210
End Page
11217
Citation
IEEE Robotics and Automation Letters 9 (12): 11210-11217 (2024)
Publisher
IEEE
In this study, we address a gap in existing unsupervised domain adaptation approaches for LiDAR-based 3D object detection, which have predominantly concentrated on adapting between established, high-density autonomous driving datasets. We focus on sparser point clouds that capture scenarios from different perspectives: not only from vehicles on the road, but also from mobile robots on sidewalks, which encounter significantly different environmental conditions and sensor configurations. We introduce Unsupervised Adversarial Domain Adaptation for 3D Object Detection (UADA3D). UADA3D does not depend on pre-trained source models or teacher-student architectures; instead, it uses an adversarial approach to directly learn domain-invariant features. We demonstrate its efficacy in various adaptation scenarios, showing significant improvements in both self-driving car and mobile robot domains.
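The abstract describes adversarial alignment of detector features between a labeled source domain and an unlabeled target domain, without a pre-trained source model or teacher-student setup. The following is a minimal sketch of gradient-reversal-style adversarial feature alignment in PyTorch, illustrating the general idea only; the names (GradientReversal, DomainDiscriminator, adversarial_step) and the specific gradient-reversal formulation are illustrative assumptions, not the authors' released UADA3D implementation.

import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips and scales the gradient on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient for x; no gradient for the scalar lambd.
        return -ctx.lambd * grad_output, None


class DomainDiscriminator(nn.Module):
    """Predicts whether a feature vector comes from the source or target domain."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feats: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        # Gradient reversal makes the feature extractor *maximize* the
        # discriminator loss, pushing it toward domain-invariant features.
        reversed_feats = GradientReversal.apply(feats, lambd)
        return self.net(reversed_feats)


def adversarial_step(feat_extractor, discriminator, src_points, tgt_points):
    """One adversarial alignment step on a source batch and an unlabeled target batch."""
    bce = nn.BCEWithLogitsLoss()

    src_feats = feat_extractor(src_points)  # e.g. pooled voxel/BEV features
    tgt_feats = feat_extractor(tgt_points)

    src_logits = discriminator(src_feats)
    tgt_logits = discriminator(tgt_feats)

    # Label source as 0 and target as 1; the reversal layer turns this
    # minimization into a min-max game between extractor and discriminator.
    loss = bce(src_logits, torch.zeros_like(src_logits)) + \
           bce(tgt_logits, torch.ones_like(tgt_logits))
    return loss

In such a setup this adversarial loss would be added to the standard detection loss on source data, so the detector stays accurate while its features become harder to attribute to either domain.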
Subjects
Deep learning for visual perception
Object detection, segmentation and categorization
DDC Class
620: Engineering
600: Technology
Publication version
publishedVersion
Name
UADA3D_Unsupervised_Adversarial_Domain_Adaptation_for_3D_Object_Detection_With_Sparse_LiDAR_and_Large_Domain_Gaps.pdf
Size
3.35 MB
Format
Adobe PDF