TY - GEN
T1 - Consolidating Attention Features for Multi-view Image Editing
AU - Patashnik, Or
AU - Gal, Rinon
AU - Cohen-Or, Daniel
AU - Zhu, Jun-Yan
AU - de la Torre, Fernando
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/12/3
Y1 - 2024/12/3
N2 - Large-scale text-to-image models enable a wide range of image editing techniques, using text prompts or even spatial controls. However, applying these editing methods to multi-view images depicting a single scene leads to 3D-inconsistent results. In this work, we focus on spatial control-based geometric manipulations and introduce a method to consolidate the editing process across various views. We build on two insights: (1) maintaining consistent features throughout the generative process helps attain consistency in multi-view editing, and (2) the queries in self-attention layers significantly influence the image structure. Hence, we propose to improve the geometric consistency of the edited images by enforcing the consistency of the queries. To do so, we introduce QNeRF, a neural radiance field trained on the internal query features of the edited images. Once trained, QNeRF can render 3D-consistent queries, which are then softly injected back into the self-attention layers during generation, greatly improving multi-view consistency. We refine the process through a progressive, iterative method that better consolidates queries across the diffusion timesteps. We compare our method to a range of existing techniques and demonstrate that it can achieve better multi-view consistency and higher fidelity to the input scene. These advantages allow us to train NeRFs with fewer visual artifacts that are better aligned with the target geometry.
KW - Diffusion Models
KW - Image Editing
KW - Multi-view
UR - http://www.scopus.com/inward/record.url?scp=85217232812&partnerID=8YFLogxK
U2 - 10.1145/3680528.3687611
DO - 10.1145/3680528.3687611
M3 - Conference contribution
AN - SCOPUS:85217232812
T3 - Proceedings - SIGGRAPH Asia 2024 Conference Papers, SA 2024
BT - Proceedings - SIGGRAPH Asia 2024 Conference Papers, SA 2024
A2 - Spencer, Stephen N.
PB - Association for Computing Machinery, Inc
T2 - SIGGRAPH Asia 2024 Conference Papers, SA 2024
Y2 - 3 December 2024 through 6 December 2024
ER -