LCD: Learned Cross-domain Descriptors for 2D-3D Matching
Quang-Hieu Pham1 Mikaela Angelina Uy2 Binh-Son Hua3 Duc Thanh Nguyen4 Gemma Roig5 Sai-Kit Yeung6
1Singapore University of Technology and Design 2Stanford University 3The University of Tokyo
4Deakin University 5Goethe University of Frankfurt 6Hong Kong University of Science and Technology
Abstract
In this work, we present a novel method to learn a local cross-domain descriptor for 2D image and 3D point cloud matching. Our proposed method is a dual auto-encoder neural network that maps 2D and 3D inputs into a shared latent space representation. We show that such local cross-domain descriptors in the shared embedding are more discriminative than those obtained from individual training in the 2D and 3D domains. To facilitate the training process, we built a new dataset by collecting approximately 1.4 million 2D-3D correspondences with various lighting conditions and settings from publicly available RGB-D scenes. Our descriptor is evaluated in three main experiments: 2D-3D matching, cross-domain retrieval, and sparse-to-dense depth estimation. Experimental results confirm the robustness of our approach as well as its competitive performance, not only in solving cross-domain tasks but also in generalizing to single-domain 2D and 3D tasks.
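The core of the method is a pair of auto-encoders whose bottlenecks share one embedding space: a 2D branch auto-encodes image patches and a 3D branch auto-encodes colored point sets, while a cross-domain term pulls descriptors of matching 2D-3D pairs together. The PyTorch sketch below illustrates this idea under stated assumptions; all layer sizes, module names (PatchAutoEncoder, PointAutoEncoder, lcd_loss), and the simplified losses (MSE in place of photometric and Chamfer reconstruction, an in-batch hardest-negative triplet) are illustrative choices, not the paper's exact implementation.

# Minimal sketch of a dual auto-encoder for 2D-3D descriptor learning.
# Shapes and losses are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAutoEncoder(nn.Module):
    """Auto-encodes a 64x64 RGB patch through a d-dim shared latent space."""
    def __init__(self, dim=256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(), # 8 -> 4
            nn.Flatten(), nn.Linear(256 * 16, dim),
        )
        self.dec = nn.Sequential(
            nn.Linear(dim, 256 * 16), nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, 3, 64, 64)
        z = F.normalize(self.enc(x), dim=1)   # unit-length 2D descriptor
        return z, self.dec(z)

class PointAutoEncoder(nn.Module):
    """PointNet-style auto-encoder for an (N, 6) colored point set (XYZ+RGB)."""
    def __init__(self, dim=256, n_points=1024):
        super().__init__()
        self.n = n_points
        self.enc = nn.Sequential(
            nn.Conv1d(6, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.fc = nn.Linear(256, dim)
        self.dec = nn.Sequential(
            nn.Linear(dim, 512), nn.ReLU(),
            nn.Linear(512, n_points * 6),
        )

    def forward(self, pts):                   # pts: (B, N, 6)
        f = self.enc(pts.transpose(1, 2))     # per-point features: (B, 256, N)
        z = F.normalize(self.fc(f.max(2).values), dim=1)  # 3D descriptor
        return z, self.dec(z).view(-1, self.n, 6)

def lcd_loss(img, pts, net2d, net3d, margin=0.5):
    """Reconstruction in both domains plus a cross-domain triplet term that
    pulls matching 2D/3D descriptors together, using the hardest in-batch
    negative for each anchor."""
    z2d, img_rec = net2d(img)
    z3d, pts_rec = net3d(pts)
    rec = F.mse_loss(img_rec, img) + F.mse_loss(pts_rec, pts)
    d = torch.cdist(z2d, z3d)                 # (B, B) pairwise distances
    pos = d.diag()                            # distances of matching pairs
    neg = (d + 1e6 * torch.eye(len(d), device=d.device)).min(1).values
    return rec + F.relu(pos - neg + margin).mean()

Because both branches emit unit-length vectors in the same space, matching a 2D patch against a 3D point cloud reduces to nearest-neighbor search over descriptors, which is the operation the 2D-3D matching and cross-domain retrieval experiments rely on.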
Citation
@inproceedings{pham2020lcd,
title = {{LCD}: {L}earned cross-domain descriptors for 2{D}-3{D} matching},
author = {Pham, Quang-Hieu and Uy, Mikaela Angelina and Hua, Binh-Son and Nguyen, Duc Thanh and Roig, Gemma and Yeung, Sai-Kit},
booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
year = 2020
}
Acknowledgements
This research project is partially supported by an internal grant from HKUST (R9429).