Real-time Progressive 3D Semantic Segmentation for Indoor Scenes

Quang-Hieu Pham1   Binh-Son Hua2   Duc Thanh Nguyen3   Sai-Kit Yeung4  

1Singapore University of Technology and Design   2The University of Tokyo   3Deakin University  
4Hong Kong University of Science and Technology

IEEE Winter Conference on Applications of Computer Vision (WACV), 2019.

Abstract

The widespread adoption of autonomous systems such as drones and assistant robots has created a need for real-time, high-quality semantic scene segmentation. In this paper, we propose an efficient yet robust technique for on-the-fly dense reconstruction and semantic segmentation of 3D indoor scenes. To guarantee (near) real-time performance, our method is built atop an efficient super-voxel clustering scheme and a conditional random field with higher-order constraints from structural and object cues, enabling progressive dense semantic segmentation without any precomputation. We extensively evaluate our method on different indoor scenes including kitchens, offices, and bedrooms in the SceneNN and ScanNet datasets and show that our technique consistently produces state-of-the-art segmentation results in both qualitative and quantitative experiments.
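To give a rough intuition for the pipeline described above, the sketch below clusters points into voxel cells (a crude stand-in for super-voxel clustering) and smooths per-point labels by majority vote within each cell (a crude stand-in for CRF-based refinement). The function name, voxel size, and voting rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from collections import Counter

def supervoxel_labels(points, point_labels, voxel_size=0.1):
    """Assign each point the majority label of its voxel cell.

    points: (N, 3) float array of 3D coordinates.
    point_labels: (N,) integer array of per-point semantic labels.
    voxel_size: edge length of the cubic cells (illustrative default).
    """
    # Hash each point into a voxel cell by flooring its coordinates.
    keys = np.floor(points / voxel_size).astype(np.int64)
    buckets = {}
    for i, key in enumerate(map(tuple, keys)):
        buckets.setdefault(key, []).append(i)

    # Majority vote within each cell smooths out noisy per-point labels.
    out = np.empty(len(points), dtype=point_labels.dtype)
    for indices in buckets.values():
        majority = Counter(point_labels[i] for i in indices).most_common(1)[0][0]
        out[indices] = majority
    return out
```

For example, three nearby points labeled {1, 1, 2} that fall in the same cell all receive label 1, while an isolated point in another cell keeps its own label. The real method operates incrementally on a streaming reconstruction and uses higher-order structural and object terms rather than a simple vote.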

Citation

@inproceedings{pham2019proseg,
  title = {Real-time progressive 3{D} semantic segmentation of indoor scenes},
  author = {Pham, Quang-Hieu and Hua, Binh-Son and Nguyen, Duc Thanh and Yeung, Sai-Kit},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year = 2019
}

Acknowledgements

This research project is partially supported by an internal grant from HKUST (R9429).