MISS GAN: A Multi-IlluStrator style generative adversarial network for image to illustration translation

Noa Barzilay, Tal Berkovitz Shalev, Raja Giryes

Research output: Contribution to journal › Article › peer-review

Abstract

Unsupervised style transfer that supports diverse input styles using only one trained generator is a challenging and interesting task in computer vision. This paper proposes the Multi-IlluStrator Style Generative Adversarial Network (MISS GAN), a multi-style framework for unsupervised image-to-illustration translation that generates styled yet content-preserving images. The illustration dataset is challenging since it comprises illustrations by seven different illustrators and hence contains diverse styles. Existing methods either require training a separate generator per illustrator to handle the different illustrators' styles, which limits their practical usage, or require training an image-specific network, which ignores the style information provided by the illustrator's other images. MISS GAN is both input-image specific and exploits the information of other images, using only one trained model.
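The key idea of serving several styles from a single generator is commonly realized by conditioning the generator's feature maps on a per-style code, e.g. via adaptive instance normalization (AdaIN). The sketch below is an illustrative assumption for how one set of shared weights can render seven illustrator styles; it is not the exact MISS GAN architecture, and all names (`adain`, `style_codes`) are hypothetical.

```python
import numpy as np

def adain(content, style_scale, style_shift, eps=1e-5):
    """Adaptive instance normalization: re-normalize content feature maps
    (C x H x W) with scale/shift parameters derived from a style code.
    This is the generic mechanism that lets ONE generator body produce
    many styles; it is a sketch, not the paper's exact method."""
    mu = content.mean(axis=(1, 2), keepdims=True)    # per-channel mean
    sigma = content.std(axis=(1, 2), keepdims=True)  # per-channel std
    normalized = (content - mu) / (sigma + eps)
    return style_scale[:, None, None] * normalized + style_shift[:, None, None]

# One (scale, shift) code per illustrator; the generator body is shared.
rng = np.random.default_rng(0)
features = rng.normal(size=(64, 32, 32))  # hypothetical content features
style_codes = {i: (rng.normal(size=64), rng.normal(size=64)) for i in range(7)}

# The same feature tensor is rendered in each of the seven styles.
stylized = {i: adain(features, s, b) for i, (s, b) in style_codes.items()}
```

Because the content features are fixed and only the style code changes, content is preserved while each code produces a distinct stylization.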

Original language: English
Pages (from-to): 140-147
Number of pages: 8
Journal: Pattern Recognition Letters
Volume: 151
DOIs
State: Published - Nov 2021

Keywords

  • Generative adversarial networks
  • Illustration
  • Image to image translation
  • Multi style transfer
