Domain Expansion of Image Generators

Yotam Nitzan, Michaël Gharbi, Richard Zhang, Taesung Park, Jun-Yan Zhu, Daniel Cohen-Or, Eli Shechtman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Can one inject new concepts into an already trained generative model, while respecting its existing structure and knowledge? We propose a new task, domain expansion, to address this. Given a pretrained generator and novel (but related) domains, we expand the generator to jointly model all domains, old and new, harmoniously. First, we note the generator contains a meaningful, pretrained latent space. Is it possible to minimally perturb this hard-earned representation, while maximally representing the new domains? Interestingly, we find that the latent space offers unused, 'dormant' directions, which do not affect the output. This provides an opportunity: by 'repurposing' these directions, we can represent new domains without perturbing the original representation. In fact, we find that pretrained generators have the capacity to add several, even hundreds, of new domains. Using our expansion method, one 'expanded' model can supersede numerous domain-specific models, without growing the model size. Additionally, a single expanded generator natively supports smooth transitions between domains, as well as composition of domains. Code and project page available here.
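To make the notion of 'dormant' directions concrete, here is a toy linear sketch, purely illustrative and not the paper's actual method (which operates on a pretrained GAN's latent space): for a rank-deficient linear map standing in for the generator, the right-singular vectors with near-zero singular values are directions the output is insensitive to, and could in principle be repurposed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator's latent-to-output mapping: a deliberately
# rank-deficient linear map, so some latent directions have (near) zero
# effect on the output -- the "dormant" ones.
latent_dim, out_dim, rank = 8, 16, 5
W = rng.normal(size=(out_dim, rank)) @ rng.normal(size=(rank, latent_dim))

# Right-singular vectors with tiny singular values span the null space,
# i.e. the dormant directions of this toy "generator".
_, s, Vt = np.linalg.svd(W)
tol = 1e-8 * s.max()
dormant = Vt[s < tol]   # directions with ~zero effect on the output
active = Vt[s >= tol]   # directions the output actually responds to

# Perturbing the latent code along a dormant direction leaves the
# output essentially unchanged; an active direction moves it.
z = rng.normal(size=latent_dim)
base = W @ z
shift_dormant = np.linalg.norm(W @ (z + 3.0 * dormant[0]) - base)
shift_active = np.linalg.norm(W @ (z + 3.0 * active[0]) - base)
print(dormant.shape[0], shift_dormant, shift_active)
```

In this toy setup the number of dormant directions is exactly `latent_dim - rank`; the paper's observation is that real pretrained generators similarly leave many latent directions unused, which is what makes expansion without enlarging the model possible.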

Original language: English
Title of host publication: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Publisher: IEEE Computer Society
Number of pages: 10
ISBN (Electronic): 9798350301298
State: Published - 2023
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Vancouver, Canada
Duration: 18 Jun 2023 – 22 Jun 2023

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Funders:
- Blavatnik Family Foundation
- Israel Science Foundation (grants 2492/20, 3441/21)
- Tel Aviv University


    • Deep learning architectures and techniques


