End to end lip synchronization with a temporal autoencoder

Yoav Shalev, Lior Wolf

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We study the problem of synchronizing the lip movement in a video with the audio stream. Our solution finds an optimal alignment using a dual-domain recurrent neural network that is trained on synthetic data we generate by dropping and duplicating video frames. Once the alignment is found, we modify the video in order to sync the two sources. Our method greatly outperforms existing methods on a variety of existing and new benchmarks. As an application, we demonstrate our ability to robustly align text-to-speech generated audio with an existing video stream. Our code is attached as supplementary material.
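The training data described in the abstract is produced by perturbing a video's frame sequence with drops and duplications. A minimal sketch of one way such synthetic misalignments could be generated is shown below; the function name, probabilities, and procedure are illustrative assumptions, not the authors' exact implementation.

```python
import random

def perturb_frames(n_frames, p_drop=0.1, p_dup=0.1, seed=0):
    """Simulate audio/video misalignment by dropping and duplicating frames.

    Returns the perturbed sequence of source-frame indices. The mapping
    from each output position back to its source index provides a
    ground-truth alignment for training. (Illustrative sketch only; the
    paper's generation procedure may differ.)
    """
    rng = random.Random(seed)
    out = []
    for i in range(n_frames):
        r = rng.random()
        if r < p_drop:
            continue            # drop this frame entirely
        out.append(i)
        if r > 1.0 - p_dup:
            out.append(i)       # duplicate this frame
    return out
```

Because frames are only dropped or repeated, never reordered, the output indices stay monotonically non-decreasing, which is what makes recovering the alignment well posed.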

Original language: English
Title of host publication: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 330-339
Number of pages: 10
ISBN (Electronic): 9781728165530
DOIs
State: Published - Mar 2020
Event: 2020 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2020 - Snowmass Village, United States
Duration: 1 Mar 2020 - 5 Mar 2020

Publication series

Name: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020

Conference

Conference: 2020 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2020
Country/Territory: United States
City: Snowmass Village
Period: 1/03/20 - 5/03/20
