3D attention-driven depth acquisition for object identification

Kai Xu, Yifei Shi, Lintao Zheng, Junyu Zhang, Min Liu, Hui Huang, Hao Su, Daniel Cohen-Or, Baoquan Chen

Research output: Contribution to journal › Article › peer-review

Abstract

We address the problem of autonomously exploring unknown objects in a scene through consecutive depth acquisitions. The goal is to reconstruct the scene while identifying the objects online from a large collection of 3D shapes. Fine-grained shape identification demands a meticulous series of observations attending to varying views and parts of the object of interest. Inspired by the recent success of attention-based models for 2D recognition, we develop a 3D Attention Model that selects both the best views to scan from and the most informative regions in each view to focus on, achieving efficient object recognition. The region-level attention yields focus-driven features that are robust against object occlusion. The attention model, trained with the 3D shape collection, encodes the temporal dependencies among consecutive views with deep recurrent networks. This facilitates order-aware view planning that accounts for robot movement cost. For instance identification, the shape collection is organized into a hierarchy associated with pre-trained hierarchical classifiers. The effectiveness of our method is demonstrated on an autonomous robot (PR) that explores a scene and identifies its objects to construct a 3D scene model.
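The order-aware view planning described above can be illustrated with a minimal greedy sketch: each candidate view is scored by its expected recognition gain minus a movement-cost penalty measured from the previously chosen view, so the resulting sequence depends on visit order. All names, the utility formula, and the toy gain values below are illustrative assumptions, not the paper's actual model (which uses a learned recurrent attention network).

```python
# Hypothetical sketch of order-aware next-best-view (NBV) selection.
# The information-gain numbers and the linear utility are toy assumptions.

def view_utility(info_gain, move_cost, weight=0.5):
    """Trade off expected recognition gain against robot movement cost."""
    return info_gain - weight * move_cost

def plan_views(candidates, start, steps=3, weight=0.5):
    """Greedily pick a sequence of views.

    Movement cost is measured from the previously selected view, so the
    plan is order-aware: choosing one view changes the cost of the rest.
    candidates: dict mapping view angle (degrees) -> expected info gain.
    """
    current, plan, remaining = start, [], dict(candidates)
    for _ in range(steps):
        if not remaining:
            break
        best = max(
            remaining,
            key=lambda v: view_utility(
                remaining[v],
                abs(v - current) / 180.0,  # normalized movement cost
                weight,
            ),
        )
        plan.append(best)
        current = best
        del remaining[best]
    return plan

# Toy candidate views: angle (deg) -> assumed expected information gain.
candidates = {0: 0.2, 45: 0.9, 90: 0.6, 180: 0.8}
plan = plan_views(candidates, start=0)
```

Because the cost term is relative to the current pose, the planner prefers a sweep of nearby informative views over jumping across the scene, which is the intuition behind accounting for robot movement cost.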

Original language: English
Article number: 238
Journal: ACM Transactions on Graphics
Volume: 35
Issue number: 6
State: Published - Nov 2016

Keywords

  • 3D acquisition
  • Attention-based model
  • Depth camera
  • Next-best-view
  • Object identification
  • Shape classification
