PointGMM: A neural GMM network for point clouds

Amir Hertz, Rana Hanocka, Raja Giryes, Daniel Cohen-Or

Research output: Contribution to journal › Conference article › peer-review

49 Scopus citations

Abstract

Point clouds are a popular representation for 3D shapes. However, they encode a particular sampling without accounting for shape priors or non-local information. We advocate for the use of a hierarchical Gaussian mixture model (hGMM), which is a compact, adaptive and lightweight representation that probabilistically defines the underlying 3D surface. We present PointGMM, a neural network that learns to generate hGMMs which are characteristic of the shape class, and also coincide with the input point cloud. PointGMM is trained over a collection of shapes to learn a class-specific prior. The hierarchical representation has two main advantages: (i) coarse-to-fine learning, which avoids converging to poor local minima; and (ii) an unsupervised, consistent partitioning of the input shape. We show that as a generative model, PointGMM learns a meaningful latent space which enables generating consistent interpolations between existing shapes, as well as synthesizing novel shapes. We also present a novel framework for rigid registration using PointGMM, which learns to disentangle the orientation of an input shape from its structure.
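To make the hGMM idea concrete, here is a minimal sketch, not the paper's method: PointGMM *learns* to emit hGMM parameters with a neural network, whereas this toy fits a small hierarchy of Gaussians to a 2D point set directly with EM, splitting each coarse component's points into finer child mixtures (the `fit_gmm`/`fit_hgmm` names and all parameters are hypothetical, chosen for illustration).

```python
import numpy as np

def fit_gmm(points, k, iters=50, seed=0):
    """Fit a k-component GMM with diagonal covariances via plain EM."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    means = points[rng.choice(n, k, replace=False)]      # init at random points
    covs = np.full((k, d), points.var(axis=0))           # per-axis variances
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log-density of every point under every component
        diff = points[:, None, :] - means[None, :, :]    # (n, k, d)
        log_p = -0.5 * np.sum(diff**2 / covs + np.log(2 * np.pi * covs), axis=2)
        log_p += np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)        # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)          # responsibilities
        # M-step: re-estimate parameters from soft assignments
        nk = resp.sum(axis=0) + 1e-9
        means = (resp.T @ points) / nk[:, None]
        diff = points[:, None, :] - means[None, :, :]
        covs = np.einsum('nk,nkd->kd', resp, diff**2) / nk[:, None] + 1e-6
        weights = nk / n
    return means, covs, weights, resp

def fit_hgmm(points, k_per_level=2, depth=2):
    """Coarse-to-fine hierarchy: fit k Gaussians, then recursively refit
    finer mixtures on each component's hard-assigned points."""
    means, covs, weights, resp = fit_gmm(points, k_per_level)
    node = {"means": means, "covs": covs, "weights": weights}
    if depth > 1:
        labels = resp.argmax(axis=1)                     # hard partition
        node["children"] = [
            fit_hgmm(points[labels == j], k_per_level, depth - 1)
            for j in range(k_per_level)
            if (labels == j).sum() >= k_per_level        # enough points to split
        ]
    return node

# Toy 2D "point cloud": two well-separated clusters
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.1, (200, 2)),
                 rng.normal(2, 0.1, (200, 2))])
tree = fit_hgmm(pts, k_per_level=2, depth=2)
print(tree["means"].shape)  # (2, 2): two coarse Gaussians in 2D
```

The hard partition produced by `labels` illustrates advantage (ii) from the abstract: every point is assigned, without supervision, to a node of the hierarchy, and the coarse-to-fine refitting illustrates advantage (i).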

Original language: English
Article number: 9156692
Pages (from-to): 12051-12060
Number of pages: 10
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2020
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: 14 Jun 2020 - 19 Jun 2020

Funding

NSF-BSF: 2017729
Horizon 2020 Framework Programme: 757497
European Research Council
