RGB×D: Learning depth-weighted RGB patches for RGB-D indoor semantic segmentation

Jinming Cao, Hanchao Leng, Daniel Cohen-Or, Dani Lischinski, Ying Chen, Changhe Tu, Yangyan Li

Research output: Contribution to journal › Article › peer-review

Abstract

Significant advances have been made in designing CNNs for RGB semantic segmentation. However, these CNNs are not widely adopted for RGB-D segmentation, due to the asymmetry between the RGB and depth modalities. Instead, dedicated architectures are designed to fuse the two modalities for effective RGB-D segmentation; these often employ complex structures, resulting in a much increased computational cost. In this paper, we propose a novel way to learn the fusion of RGB and depth information at an early stage. This enables our method to adopt existing RGB segmentation networks with minimal modification. Our method thus serves as a simple yet effective bridge between RGB and RGB-D semantic segmentation, avoiding the need to design a far more complex network structure for RGB-D segmentation. The proposed method treats RGB and depth information in an inherently asymmetric manner, and to the best of our knowledge, it is the first approach that learns to fuse them in a multiplicative manner for RGB-D segmentation; thus, we call it RGB×D. Extensive experiments and ablation studies on the challenging NYUDv2, SUN RGB-D, and Cityscapes semantic segmentation benchmarks show that the proposed RGB×D offers a consistent improvement over several baselines.
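The core idea of multiplicative early fusion can be illustrated with a minimal sketch: derive a per-pixel weight from the depth map and multiply it into the RGB input before it enters a standard RGB segmentation network. Note that the depth-to-weight mapping below (a scalar sigmoid with parameters `w` and `b`) is a hypothetical stand-in for the learned weighting described in the paper, not the authors' actual formulation.

```python
import numpy as np

def depth_weighted_rgb(rgb, depth, w, b):
    """Multiplicatively gate RGB values by a function of depth (RGB x D sketch).

    rgb:   (H, W, 3) float array in [0, 1]
    depth: (H, W) float array (e.g. metres)
    w, b:  scalar parameters of a toy depth-to-weight mapping
           (hypothetical stand-ins for the learned weighting)
    """
    # Map depth to a per-pixel weight in (0, 1) via a sigmoid.
    weight = 1.0 / (1.0 + np.exp(-(w * depth + b)))   # shape (H, W)
    # Multiplicative fusion: scale each RGB channel by its depth-derived weight.
    return rgb * weight[..., None]                    # shape (H, W, 3)

rgb = np.random.rand(4, 4, 3)
depth = np.random.rand(4, 4) * 5.0
fused = depth_weighted_rgb(rgb, depth, w=1.0, b=-2.0)
print(fused.shape)  # (4, 4, 3)
```

Because the weight lies in (0, 1), the fused input never exceeds the original RGB values; the depth map acts as a soft, per-pixel attenuation that downstream RGB networks can consume without architectural changes.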

Original language: English
Pages (from-to): 568-580
Number of pages: 13
Journal: Neurocomputing
Volume: 462
DOIs
State: Published - 28 Oct 2021

Keywords

  • Deep learning
  • Depth information
  • RGB-D indoor semantic segmentation
