Figure 1. Comparison between cross-domain adaptation (Left) and our DualCross (Right) on the Day-to-Night scenario.
Abstract
Closing the domain gap between training and deployment and incorporating multiple sensor modalities are two challenging yet critical topics for self-driving. Existing work focuses on only one of these topics, overlooking the simultaneous domain and modality shift that pervasively exists in real-world scenarios. For example, a model trained with multi-sensor data collected in Europe may need to run in Asia with only a subset of input sensors available. In this work, we propose DualCross, a cross-modality, cross-domain adaptation framework that facilitates the learning of a more robust monocular bird's-eye-view (BEV) perception model, transferring point cloud knowledge from a LiDAR sensor in one domain during the training phase to a camera-only testing scenario in a different domain. This work results in the first open analysis of cross-domain, cross-sensor perception and adaptation for monocular 3D tasks in the wild. We benchmark our approach on large-scale datasets under a wide range of domain shifts and show state-of-the-art results against various baselines.
Motivation
Left & Middle: Existing models assume fixed sensor or modality during training and testing phases. Right: A more realistic setting considers both cross-modality and cross-domain shift. We propose DualCross to reduce the domain and modality discrepancy, and achieve state-of-the-art performance.Model
DualCross includes three components. (1) The LiDAR-Teacher uses voxelized LiDAR point clouds to transform image features into the BEV frame; it provides essential knowledge on how to guide image learning given LiDAR information. (2) The Camera-Student is supervised by the teacher model as well as the LiDAR-based ground truth. (3) Discriminators align features from the source and target domains.
For more details please refer to our paper.
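To make the interplay of the three components concrete, here is a minimal PyTorch-style sketch of a single training step. All module names, tensor keys, and loss weights are illustrative assumptions, not the official DualCross implementation; the teacher and student are assumed to return a BEV feature map and a BEV prediction.

```python
# Hypothetical sketch of the teacher-student-discriminator training step.
# Names, shapes, and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips gradients so the student learns
    domain-invariant BEV features against the discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None


class Discriminator(nn.Module):
    """Predicts source vs. target domain from a BEV feature map."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, feat, lam=1.0):
        return self.net(GradientReversal.apply(feat, lam))


def training_step(teacher, student, disc, batch, lam=0.1):
    """One illustrative optimization step combining the three components."""
    # (1) LiDAR-Teacher: lifts image features to BEV with LiDAR (source domain only).
    with torch.no_grad():
        t_bev, _ = teacher(batch["src_imgs"], batch["src_lidar"])

    # (2) Camera-Student: camera-only BEV prediction on both domains.
    s_bev_src, s_pred_src = student(batch["src_imgs"])
    s_bev_tgt, _ = student(batch["tgt_imgs"])

    loss = F.binary_cross_entropy_with_logits(s_pred_src, batch["src_labels"])
    loss = loss + F.mse_loss(s_bev_src, t_bev)  # distill teacher BEV features

    # (3) Discriminator: align student features across source and target domains.
    d_src = disc(s_bev_src, lam)
    d_tgt = disc(s_bev_tgt, lam)
    loss = loss + F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) \
                + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))
    return loss
```

The gradient-reversal layer lets a single loss jointly train the discriminator to separate domains while pushing the student's BEV features toward domain invariance.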
Paper
Yunze Man,
Liang-Yan Gui, and
Yu-Xiong Wang
DualCross: Cross-Modality Cross-Domain Adaptation for Monocular BEV Perception
In IROS, 2023. [Project] [Paper] [Code]
Citation
@inproceedings{man2023dualcross,
  author    = {Man, Yunze and Gui, Liang-Yan and Wang, Yu-Xiong},
  booktitle = {IROS},
  title     = {{DualCross: Cross-Modality Cross-Domain Adaptation for Monocular BEV Perception}},
  year      = {2023}
}