TY - JOUR
T1 - StyleFusion: Disentangling Spatial Segments in StyleGAN-Generated Images
AU - Kafri, Omer
AU - Patashnik, Or
AU - Alaluf, Yuval
AU - Cohen-Or, Daniel
N1 - Publisher Copyright:
© 2022 Association for Computing Machinery.
PY - 2022/10/26
Y1 - 2022/10/26
AB - We present StyleFusion, a new mapping architecture for StyleGAN, which takes as input a number of latent codes and fuses them into a single style code. Inserting the resulting style code into a pre-trained StyleGAN generator results in a single harmonized image in which each semantic region is controlled by one of the input latent codes. Effectively, StyleFusion yields a disentangled representation of the image, providing fine-grained control over each region of the generated image. Moreover, to facilitate global control over the generated image, a special input latent code is incorporated into the fused representation. StyleFusion operates in a hierarchical manner, where each level is tasked with learning to disentangle a pair of image regions (e.g., the car body and wheels). The resulting learned disentanglement allows one to modify both local, fine-grained semantics (e.g., facial features) and more global features (e.g., pose and background), providing improved flexibility in the synthesis process. As a natural extension, StyleFusion allows one to perform semantically-aware cross-image mixing of regions that are not necessarily aligned. Finally, we demonstrate how StyleFusion can be paired with existing editing techniques to more faithfully constrain the edit to the user's region of interest. Code is available at: https://github.com/OmerKafri/StyleFusion.
KW - Generative Adversarial Network
KW - disentangled representation
KW - image generation
UR - http://www.scopus.com/inward/record.url?scp=85141273726&partnerID=8YFLogxK
U2 - 10.1145/3527168
DO - 10.1145/3527168
M3 - Article
AN - SCOPUS:85141273726
SN - 0730-0301
VL - 41
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 5
M1 - 179
ER -