Co-occurrence based texture synthesis

Anna Darzi, Itai Lang, Ashutosh Taklikar, Hadar Averbuch-Elor*, Shai Avidan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, that generates arbitrarily large images while retaining local, interpretable control over texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive, and interpretable latent representation for texture synthesis, which can be used to generate smooth texture morphs between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture by directly modifying the co-occurrence values.
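The abstract's key technical ingredient is a differentiable co-occurrence loss. The paper's exact formulation is not reproduced in this record, but the following minimal PyTorch sketch illustrates one way such a statistic can be made differentiable: pixels are softly assigned to K reference colors with a Gaussian kernel, co-occurrences are accumulated over a local window, and the resulting matrix is compared against a target. All names and parameters here (soft_cooccurrence, cooccurrence_loss, sigma, radius) are illustrative assumptions, not the authors' implementation.

```python
import torch

def soft_cooccurrence(img, centers, sigma=0.1, radius=3):
    """Differentiable co-occurrence matrix via soft color assignment.

    img:     (B, C, H, W) image tensor with values in [0, 1]
    centers: (K, C) reference colors (e.g., k-means centers of the exemplar)
    Returns a (B, K, K) normalized co-occurrence matrix. This is a sketch,
    not the paper's exact formulation.
    """
    B, C, H, W = img.shape
    K = centers.shape[0]
    # Soft assignment: Gaussian affinity of every pixel to every center.
    pix = img.permute(0, 2, 3, 1).reshape(B, H * W, C)            # (B, HW, C)
    d2 = ((pix.unsqueeze(2) - centers.view(1, 1, K, C)) ** 2).sum(-1)
    p = torch.softmax(-d2 / (2 * sigma ** 2), dim=-1)             # (B, HW, K)
    p = p.view(B, H, W, K).permute(0, 3, 1, 2)                    # (B, K, H, W)
    # Aggregate each pixel's neighborhood assignments with a box filter.
    kernel = torch.ones(K, 1, 2 * radius + 1, 2 * radius + 1,
                        device=img.device)
    nbr = torch.nn.functional.conv2d(p, kernel, padding=radius, groups=K)
    # Co-occurrence: outer product of a pixel's assignment with its
    # neighborhood's aggregated assignments, summed over all positions.
    M = torch.einsum('bkhw,blhw->bkl', p, nbr)
    return M / M.sum(dim=(1, 2), keepdim=True)                    # normalize

def cooccurrence_loss(fake, ref_M, centers):
    """L1 distance between the generated image's co-occurrence statistics
    and a target matrix -- one plausible form of a co-occurrence loss."""
    return (soft_cooccurrence(fake, centers) - ref_M).abs().mean()
```

In a setup like the one the abstract describes, such a term would be added to the generator's adversarial objective as an auxiliary loss, with ref_M computed once from the exemplar texture and, for the local control the paper demonstrates, evaluated per spatial region rather than over the whole image.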

Original language: English
Pages (from-to): 289-302
Number of pages: 14
Journal: Computational Visual Media
Volume: 8
Issue number: 2
State: Published - Jun 2022

Keywords

  • co-occurrence
  • deep learning
  • generative adversarial networks (GANs)
  • texture synthesis
