Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer

Sigal Raab, Inbar Gat, Nathan Sala, Guy Tevet, Rotem Shalev-Arkushin, Ohad Fried, Amit Haim Bermano, Daniel Cohen-Or

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Given the remarkable results of motion synthesis with diffusion models, a natural question arises: how can we effectively leverage these models for motion editing? Existing diffusion-based motion editing methods overlook the profound potential of the prior embedded within the weights of pre-trained models, which enables manipulating the latent feature space; hence, they primarily center on handling the motion space. In this work, we explore the attention mechanism of pre-trained motion diffusion models. We uncover the roles and interactions of attention elements in capturing and representing intricate human motion patterns, and carefully integrate these elements to transfer a leader motion to a follower one while maintaining the nuanced characteristics of the follower, resulting in zero-shot motion transfer. Manipulating features associated with selected motions allows us to confront a challenge observed in prior motion diffusion approaches, which use general directives (e.g., text, music) for editing, ultimately failing to convey subtle nuances effectively. Our work is inspired by the phrase Monkey See, Monkey Do, relating to human mimicry. Our technique enables accomplishing tasks such as synthesizing out-of-distribution motions, style transfer, and spatial editing. Furthermore, diffusion inversion is seldom employed for motions; as a result, editing efforts focus on generated motions, limiting the editability of real ones. MoMo harnesses motion inversion, extending its application to both real and generated motions. Experimental results show the advantage of our approach over the current art. In particular, unlike methods tailored for specific applications through training, our approach is applied at inference time, requiring no training. Webpage: https://monkeyseedocg.github.io.
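The attention manipulation described in the abstract can be pictured with a small sketch. The following is a minimal, hypothetical illustration of attention injection in a motion diffusion denoiser, written in PyTorch. The toy MotionSelfAttention module, the injection hook, and the rule of feeding the leader's queries while keeping the follower's keys and values are assumptions made for illustration only, not the authors' exact procedure.

```python
# Hypothetical sketch of zero-shot attention injection at inference time.
# Names and the injection rule are illustrative, not the paper's method.
import torch
import torch.nn as nn

class MotionSelfAttention(nn.Module):
    """Toy self-attention over motion frames, standing in for one layer
    of a pre-trained motion diffusion denoiser."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.injected_q = None  # optionally overridden at inference time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Queries may come from a cached "leader" pass; keys/values always
        # come from the current ("follower") features.
        q = self.injected_q if self.injected_q is not None else x
        out, _ = self.attn(q, x, x)
        return out

dim, frames = 64, 120
layer = MotionSelfAttention(dim)

# Run the layer on the leader motion once and cache its query features.
leader_feats = torch.randn(1, frames, dim)
with torch.no_grad():
    _ = layer(leader_feats)
leader_q = leader_feats  # in this toy layer, queries are the layer input

# When processing the follower, inject the leader's queries so the output
# follows the leader's structure while keys/values retain follower nuances.
follower_feats = torch.randn(1, frames, dim)
layer.injected_q = leader_q
with torch.no_grad():
    transferred = layer(follower_feats)
print(transferred.shape)  # torch.Size([1, 120, 64])
```

Since the manipulation happens purely inside the pre-trained model's attention layers at inference time, no fine-tuning or task-specific training is needed, which is the sense in which the transfer is zero-shot.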

Original language: English
Title of host publication: Proceedings - SIGGRAPH Asia 2024 Conference Papers, SA 2024
Editors: Stephen N. Spencer
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9798400711312
DOIs
State: Published - 3 Dec 2024
Event: 2024 SIGGRAPH Asia 2024 Conference Papers, SA 2024 - Tokyo, Japan
Duration: 3 Dec 2024 - 6 Dec 2024

Publication series

Name: Proceedings - SIGGRAPH Asia 2024 Conference Papers, SA 2024

Conference

Conference: 2024 SIGGRAPH Asia 2024 Conference Papers, SA 2024
Country/Territory: Japan
City: Tokyo
Period: 3/12/24 - 6/12/24

Funding

Funders (funder numbers):
Tel Aviv University
Blavatnik Family Foundation
Israel Science Foundation (2492/20, 3441/21)

Keywords

• Animation
• Computer Graphics
• Deep Features
• Human motion
• Motion synthesis
