Deep geometric texture synthesis

Amir Hertz, Rana Hanocka, Raja Giryes, Daniel Cohen-Or

Research output: Contribution to journal › Article › peer-review


Recently, deep generative adversarial networks for image generation have advanced rapidly; yet, only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remain fundamental topics in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which are used to subdivide the mesh and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in both the normal and tangential directions), enabling synthesis of geometric textures that cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches enables a genus-oblivious framework, facilitating texture transfer between shapes of different genus.
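The key point that displacements are not restricted to the normal direction can be illustrated with a minimal sketch. This is not the authors' code: the per-vertex frame construction and function names below are illustrative assumptions, showing only how a 3D offset expressed in a local (normal, tangent, bitangent) frame displaces vertices both along and across the surface.

```python
import numpy as np

def vertex_normals(verts, faces):
    # Area-weighted per-vertex normals accumulated from triangle faces.
    n = np.zeros_like(verts)
    tris = verts[faces]
    fn = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    for i in range(3):
        np.add.at(n, faces[:, i], fn)
    return n / np.clip(np.linalg.norm(n, axis=1, keepdims=True), 1e-12, None)

def displace(verts, faces, offsets):
    """Displace each vertex by a 3D offset (n, t, b components) expressed in
    a local frame, so displacements are not limited to a normal-only
    displacement map. `offsets` would come from the learned network; here it
    is just an array of shape (num_verts, 3)."""
    n = vertex_normals(verts, faces)
    # Build an arbitrary but consistent tangent frame per vertex
    # (illustrative choice; any orthonormal completion of n works).
    a = np.where(np.abs(n[:, :1]) < 0.9, [[1.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]])
    t = np.cross(n, a)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    b = np.cross(n, t)
    return verts + offsets[:, :1] * n + offsets[:, 1:2] * t + offsets[:, 2:3] * b
```

For example, a flat triangle with purely "normal" offsets moves straight off its plane, while nonzero tangential components slide vertices along the surface, which a scalar displacement map cannot express.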

Original language: English
Article number: 108
Journal: ACM Transactions on Graphics
Issue number: 4
State: Published - 8 Jul 2020


Keywords
  • geometric deep learning
  • shape analysis
  • surface reconstruction

