Unsupervised Generation of Free-Form and Parameterized Avatars

Adam Polyak*, Yaniv Taigman, Lior Wolf

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We study two problems involving the task of mapping images between different domains. The first problem transfers an image in one domain to an analogous image in another domain. The second problem extends the first by mapping an input image to a tied pair, consisting of a vector of parameters and an image created by a graphical engine from that parameter vector. As in the first problem, the mapping's objective is to make the output image as similar as possible to the input image. In both cases, no supervision in the form of matched inputs and outputs is given during training. We compare the two unsupervised learning problems to the problem of unsupervised domain adaptation, derive generalization bounds based on discrepancy, and employ a GAN to implement network solutions that correspond to these bounds. Experimentally, our methods are shown to solve the problem of automatically creating avatars.
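As a rough illustration of the second (tied-output) problem described above, the sketch below builds a toy pipeline: an input image is encoded, regressed to a parameter vector, and "rendered" back to an image, with the loss measuring feature-space similarity between output and input. All components here (`f`, `g`, `W_c`, the dimensions) are hypothetical toy stand-ins for the paper's deep networks and graphical engine, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_p = 8, 4  # toy image and parameter dimensions (assumed)

W_f = rng.normal(size=(dim_x, dim_x))  # weights of toy feature extractor
W_g = rng.normal(size=(dim_p, dim_x))  # weights of toy "graphical engine"
W_c = rng.normal(size=(dim_x, dim_p))  # weights of toy parameter regressor

def f(x):
    """Hypothetical shared feature extractor (stand-in for a trained CNN)."""
    return np.tanh(x @ W_f)

def g(p):
    """Hypothetical differentiable stand-in for the graphical engine:
    maps a parameter vector to a flattened avatar image."""
    return np.tanh(p @ W_g)

x = rng.normal(size=(1, dim_x))  # input image (flattened, toy scale)
p = f(x) @ W_c                   # predicted engine parameters (the "tied" vector)
y = g(p)                         # avatar image rendered from those parameters

# Unsupervised objective: the rendered output should resemble the input
# in feature space (no matched input/output pairs are ever used).
loss = float(np.mean((f(y) - f(x)) ** 2))
```

In the paper this feature-space similarity is enforced alongside GAN losses that tie the generated images to the target (avatar) domain; the snippet shows only the reconstruction-style term to make the tied pair `(p, g(p))` concrete.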

Original language: English
Article number: 8425579
Pages (from-to): 444-459
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 2
State: Published - 1 Feb 2020


Keywords:
  • Deep learning
  • analysis by synthesis
  • cross-domain transfer
  • domain adaptation
  • domain transfer network
  • neural network
  • tied output synthesis


