Self-Sampling for Neural Point Cloud Consolidation

Gal Metzer, Rana Hanocka, Raja Giryes, Daniel Cohen-Or

Research output: Contribution to journal › Article › peer-review


We introduce a novel technique for neural point cloud consolidation which learns only from the input point cloud. Unlike other point up-sampling methods, which analyze shapes via local patches, in this work we learn from global subsets. We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network. Specifically, we define source and target subsets according to the desired consolidation criteria (e.g., generating sharp points or points in sparse regions). The network learns a mapping from source to target subsets, and thereby implicitly learns to consolidate the point cloud. During inference, the network is fed random subsets of points from the input, which it displaces to synthesize a consolidated point set. We leverage the inductive bias of neural networks to eliminate noise and outliers, a notoriously difficult problem in point cloud consolidation. The shared weights of the network are optimized over the entire shape, learning non-local statistics and exploiting the recurrence of local-scale geometries. Specifically, the network encodes the distribution of the underlying shape surface within a fixed set of local kernels, which results in the best explanation of the underlying shape surface. We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
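The core idea of the abstract — drawing global "source" subsets uniformly from the input and "target" subsets biased by a consolidation criterion such as point density — can be sketched in a few lines. The snippet below is an illustrative toy in NumPy, not the paper's implementation: the point cloud, the density proxy, and the subset sizes are all placeholder choices, and the deep network that maps source subsets to target subsets is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input "point cloud": a noisy 2D circle standing in for a 3D scan.
theta = rng.uniform(0.0, 2.0 * np.pi, size=2000)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
points += rng.normal(scale=0.02, size=points.shape)

def local_density(pts, k=8):
    """Crude density proxy: inverse mean distance to the k nearest neighbors."""
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    knn = np.sort(d2, axis=1)[:, 1:k + 1]  # drop the zero self-distance
    return 1.0 / (np.sqrt(knn).mean(axis=1) + 1e-12)

def self_sample(pts, subset_size, weights=None):
    """Draw a global subset; optional weights bias it toward a criterion."""
    p = None if weights is None else weights / weights.sum()
    idx = rng.choice(len(pts), size=subset_size, replace=False, p=p)
    return pts[idx]

# Source subsets: uniform global samples of the input cloud.
source = self_sample(points, 256)

# Target subsets: biased toward sparse regions (low local density),
# one of the consolidation criteria the abstract mentions.
sparseness = 1.0 / local_density(points)
target = self_sample(points, 256, weights=sparseness)

# In the paper, a network would be trained to map source subsets to
# target subsets; repeating the sampling above yields fresh training pairs.
```

A sharpness criterion would follow the same pattern, with the sampling weights derived from a local curvature estimate instead of density.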

Original language: English
Article number: 3470645
Number of pages: 14
Journal: ACM Transactions on Graphics
Issue number: 5
State: Published - Oct 2021


Keywords:
  • Geometric deep learning
  • point clouds
  • surface reconstruction


