StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

Rinon Gal, Or Patashnik, Haggai Maron, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Research output: Contribution to journal › Article › peer-review

166 Scopus citations

Abstract

Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or infeasible to reach with existing methods. We conduct an extensive set of experiments across a wide range of domains. These demonstrate the effectiveness of our approach, and show that our models preserve the latent-space structure that makes generative models appealing for downstream tasks. Code and videos available at: stylegan-nada.github.io/
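The abstract describes shifting a pretrained generator toward a text-described domain using only CLIP guidance. The sketch below illustrates one way such CLIP-guided fine-tuning can be set up: a directional loss that aligns the CLIP-space change between a frozen and a trainable copy of the generator with the CLIP-space direction between a source and a target text prompt. It assumes PyTorch and the open-source `clip` package; `ToyGenerator`, the prompts, the learning rate, and the step count are illustrative stand-ins, not the authors' released code (see stylegan-nada.github.io/ for that).

```python
import copy
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval().requires_grad_(False)  # keep CLIP frozen, in fp32

# CLIP's standard image-normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)


class ToyGenerator(torch.nn.Module):
    """Stand-in for a pretrained StyleGAN-like generator mapping z -> RGB image in [-1, 1]."""

    def __init__(self, z_dim: int = 512, img_size: int = 64):
        super().__init__()
        self.img_size = img_size
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim, 3 * img_size * img_size), torch.nn.Tanh())

    def forward(self, z):
        return self.net(z).view(z.shape[0], 3, self.img_size, self.img_size)


def encode_text(prompt: str) -> torch.Tensor:
    # Normalized CLIP text embedding for a single prompt.
    with torch.no_grad():
        feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    return F.normalize(feat, dim=-1)


def encode_image(img: torch.Tensor) -> torch.Tensor:
    # Normalized CLIP image embedding; resize and renormalize generator output first.
    img = (img + 1) / 2                                      # [-1, 1] -> [0, 1]
    img = F.interpolate(img, (224, 224), mode="bilinear", align_corners=False)
    return F.normalize(clip_model.encode_image((img - CLIP_MEAN) / CLIP_STD), dim=-1)


# Direction in CLIP text space between source and target domain descriptions.
text_dir = F.normalize(encode_text("sketch") - encode_text("photo"), dim=-1)

# Two copies of the generator: one frozen as a source-domain reference,
# one fine-tuned toward the text-described target domain.
G_frozen = ToyGenerator().to(device).eval().requires_grad_(False)
G_train = copy.deepcopy(G_frozen).train().requires_grad_(True)
opt = torch.optim.Adam(G_train.parameters(), lr=2e-3)

for step in range(300):                        # short fine-tuning run
    z = torch.randn(4, 512, device=device)     # shared latent codes for both copies
    img_src = G_frozen(z)                      # source-domain images
    img_tgt = G_train(z)                       # shifted images

    # Push the image-space CLIP direction to match the text-space direction.
    img_dir = F.normalize(encode_image(img_tgt) - encode_image(img_src), dim=-1)
    loss = (1 - (img_dir * text_dir).sum(dim=-1)).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this setup only the trainable copy of the generator receives gradients; CLIP and the reference generator stay frozen, which is what keeps the adaptation anchored to the original latent-space structure mentioned in the abstract.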

Original language: English
Article number: 3530164
Journal: ACM Transactions on Graphics
Volume: 41
Issue number: 4
DOIs
State: Published - 22 Jul 2022

Funding

Funders and funder numbers:

• Deutsch Foundation
• Yandex Initiative in Machine Learning
• Blavatnik Family Foundation
• United States-Israel Binational Science Foundation: 2020280
• Israel Science Foundation: 2492/20, 3441/21

Keywords

• Generator domain adaptation
• Text-guided content generation
• Zero-shot training
