Hierarchical patch VAE-GAN: Generating diverse videos from a single sample

Shir Gur*, Sagie Benaim*, Lior Wolf

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

29 Scopus citations

Abstract

We consider the task of generating diverse and novel videos from a single video sample. Recently, new hierarchical patch-GAN based approaches were proposed for generating diverse images, given only a single sample at training time. When applied to videos, however, these approaches fail to generate diverse samples, and often collapse into generating samples similar to the training video. We introduce a novel patch-based variational autoencoder (VAE) which allows for much greater diversity in generation. Using this tool, a new hierarchical video generation scheme is constructed: at coarse scales, our patch-VAE is employed, ensuring samples are of high diversity. Subsequently, at finer scales, a patch-GAN renders the fine details, resulting in high quality videos. Our experiments show that the proposed method produces diverse samples in both the image domain and the more challenging video domain. Our code and supplementary material (SM) with additional samples are available at https://shirgur.github.io/hp-vae-gan.
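The abstract describes a coarse-to-fine pipeline: at the coarsest scales a patch-VAE samples latents (providing diversity), and at finer scales a patch-GAN generator renders detail. The following is a minimal stdlib-only sketch of that control flow, not the authors' implementation; all function names, the toy 1-D "video" representation, and the scale counts are illustrative assumptions.

```python
import random

def patch_vae_sample(coarse, noise_scale=1.0):
    # Stand-in for VAE decoding: sample a perturbation per patch,
    # which is where diversity across generated samples comes from.
    return [v + random.gauss(0.0, noise_scale) for v in coarse]

def patch_gan_refine(sample):
    # Stand-in for the patch-GAN renderer: a deterministic
    # "detail" pass over the upsampled patches.
    return [round(v, 3) for v in sample]

def upsample(sample):
    # Nearest-neighbour upsampling: double the number of patches.
    return [v for v in sample for _ in range(2)]

def generate(num_scales=5, num_vae_scales=2, seed=0):
    """Coarse-to-fine generation: VAE at coarse scales, GAN at fine scales."""
    random.seed(seed)
    sample = [0.0]  # coarsest scale: a single patch
    for s in range(num_scales):
        if s < num_vae_scales:
            sample = patch_vae_sample(sample)  # diversity from VAE sampling
        else:
            sample = patch_gan_refine(sample)  # detail from GAN rendering
        sample = upsample(sample)
    return sample

video = generate()
print(len(video))  # 2**num_scales patches at the finest scale
```

Because only the coarse scales are stochastic, different seeds yield globally different layouts while the fine-scale refinement remains a deterministic rendering pass, mirroring the division of labour the paper motivates.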

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 - 12 Dec 2020

Funding

Funders: European Commission; Horizon 2020 (grant ERC CoG 725974)
