TY - GEN
T1 - Cross-Image Attention for Zero-Shot Appearance Transfer
AU - Alaluf, Yuval
AU - Garibi, Daniel
AU - Patashnik, Or
AU - Averbuch-Elor, Hadar
AU - Cohen-Or, Daniel
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/7/13
Y1 - 2024/7/13
AB - Recent advancements in text-to-image generative models have demonstrated a remarkable ability to capture a deep semantic understanding of images. In this work, we leverage this semantic knowledge to transfer the visual appearance between objects that share similar semantics but may differ significantly in shape. To achieve this, we build upon the self-attention layers of these generative models and introduce a cross-image attention mechanism that implicitly establishes semantic correspondences across images. Specifically, given a pair of images - one depicting the target structure and the other specifying the desired appearance - our cross-image attention combines the queries corresponding to the structure image with the keys and values of the appearance image. This operation, when applied during the denoising process, leverages the established semantic correspondences to generate an image combining the desired structure and appearance. In addition, to improve the output image quality, we harness three mechanisms that either manipulate the noisy latent codes or the model's internal representations throughout the denoising process. Importantly, our approach is zero-shot, requiring no optimization or training. Experiments show that our method is effective across a wide range of object categories and is robust to variations in shape, size, and viewpoint between the two input images.
KW - Appearance Transfer
KW - Diffusion Models
KW - Image Editing
UR - http://www.scopus.com/inward/record.url?scp=85198295896&partnerID=8YFLogxK
U2 - 10.1145/3641519.3657423
DO - 10.1145/3641519.3657423
M3 - Conference contribution
AN - SCOPUS:85198295896
T3 - Proceedings - SIGGRAPH 2024 Conference Papers
BT - Proceedings - SIGGRAPH 2024 Conference Papers
A2 - Spencer, Stephen N.
PB - Association for Computing Machinery, Inc
T2 - SIGGRAPH 2024 Conference Papers
Y2 - 28 July 2024 through 1 August 2024
ER -
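
The abstract above describes the paper's core operation: during denoising, the self-attention of a text-to-image diffusion model is replaced by an attention step whose queries come from the structure image's pass and whose keys and values come from the appearance image's pass. The sketch below (kept outside the RIS record) illustrates that operation in plain PyTorch under stated assumptions; the function and variable names are illustrative, not taken from the authors' released code.

```python
# Minimal sketch of the cross-image attention described in the abstract.
# Assumption: plain PyTorch; names like `cross_image_attention`, `q_struct`,
# `k_app`, and `v_app` are hypothetical, not the authors' API.
import torch

def cross_image_attention(q_struct: torch.Tensor,
                          k_app: torch.Tensor,
                          v_app: torch.Tensor) -> torch.Tensor:
    """Attend with queries from the structure image's denoising pass and
    keys/values from the appearance image's pass, so each structure token
    gathers appearance features from its implicit semantic correspondent.

    Expected shapes: (batch, heads, tokens, dim); v_app shares k_app's
    token count.
    """
    d = q_struct.shape[-1]
    # Standard scaled dot-product attention, but across the two images.
    attn = torch.softmax(q_struct @ k_app.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v_app

# In a diffusion U-Net, this would stand in for each self-attention call on
# the structure branch at a given layer and timestep, while the appearance
# branch runs normally and exposes its keys and values at the same point.
```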