Global-to-local generative model for 3D shapes

Hao Wang, Nadav Schor, Ruizhen Hu, Haibin Huang, Daniel Cohen-Or, Hui Huang

Research output: Contribution to journal › Conference article › peer-review

Abstract

We introduce a generative model for 3D man-made shapes. The presented method takes a global-to-local (G2L) approach. A generative adversarial network (GAN) is first built to construct the overall structure of the shape, segmented and labeled into parts. A novel conditional auto-encoder (AE) is then augmented to act as a part-level refiner. The GAN, associated with additional local discriminators and quality losses, synthesizes a voxel-based model and assigns the voxels part labels that are represented in separate channels. The AE is trained to amend the initial synthesis of the parts, yielding more plausible part geometries. We also introduce new means to measure and evaluate the performance of an adversarial generative model. We demonstrate that our global-to-local generative model produces significantly better results than a plain three-dimensional GAN, in terms of both shape variety and distribution with respect to the training data.
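The abstract describes representing a shape as a voxel grid whose part labels live in separate channels. The following is a minimal sketch of that representation only (not the authors' code); the part count, resolution, and the toy region are assumptions for illustration.

```python
import numpy as np

# Hypothetical example: a shape stored as a voxel grid with one channel
# per semantic part, as the abstract describes. K parts, D^3 resolution
# (both values assumed for illustration).
K, D = 4, 32
parts = np.zeros((K, D, D, D), dtype=np.float32)

# Mark a toy occupied region belonging to part 0.
parts[0, 8:24, 8:24, 0:4] = 1.0

# The overall shape occupancy is the union (max) over part channels,
# while argmax recovers a per-voxel part id where the shape is occupied.
occupancy = parts.max(axis=0)    # (D, D, D) binary mask
labels = parts.argmax(axis=0)    # per-voxel part label

print(occupancy.shape, int(occupancy.sum()))
```

A global discriminator would see the union, while per-part (local) discriminators would each see one channel; that separation is what lets the part-level refiner operate on individual channels.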

Original language: English
Article number: 214
Number of pages: 10
Journal: ACM Transactions on Graphics
Volume: 37
Issue number: 6
DOIs
State: Published - 4 Dec 2018
Externally published: Yes
Event: SIGGRAPH Asia 2018 Technical Papers - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH Asia 2018 - Tokyo, Japan
Duration: 4 Dec 2018 - 7 Dec 2018

Keywords

  • Generative adversarial networks
  • Global-to-local
  • Part refiner
  • Semantic segmentation
  • Shape modeling
