TY - JOUR
T1 - StyleGAN-NADA
T2 - CLIP-Guided Domain Adaptation of Image Generators
AU - Gal, Rinon
AU - Patashnik, Or
AU - Maron, Haggai
AU - Bermano, Amit H.
AU - Chechik, Gal
AU - Cohen-Or, Daniel
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/7/22
Y1 - 2022/7/22
AB - Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large scale Contrastive-Language-Image-Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or infeasible to reach with existing methods. We conduct an extensive set of experiments across a wide range of domains. These demonstrate the effectiveness of our approach, and show that our models preserve the latent-space structure that makes generative models appealing for downstream tasks. Code and videos available at: stylegan-nada.github.io/
KW - Generator domain adaptation
KW - Text-guided content generation
KW - Zero-shot training
UR - http://www.scopus.com/inward/record.url?scp=85135005966&partnerID=8YFLogxK
U2 - 10.1145/3528223.3530164
DO - 10.1145/3528223.3530164
M3 - Article
AN - SCOPUS:85135005966
SN - 0730-0301
VL - 41
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 4
M1 - 3530164
ER -