Zero-Shot Voice Conditioning for Denoising Diffusion TTS Models

Alon Levkovitch, Eliya Nachmani, Lior Wolf

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present a novel way of conditioning a pretrained denoising diffusion speech model to produce speech in the voice of a novel person unseen during training. The method requires only a short (∼3 seconds) sample from the target person, and generation is steered at inference time, without any training steps. At the heart of the method lies a sampling process that combines the estimate of the denoising model with a low-pass version of the new speaker's sample. Objective and subjective evaluations show that our sampling method can generate a voice whose frequency characteristics are similar to those of the target speaker, with accuracy comparable to state-of-the-art methods, and without training.
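The guided sampling idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the `denoiser` stub, the FFT-based `lowpass` filter, the `cutoff_bins` parameter, and the `alpha` mixing weight are all assumptions introduced here to show how a low-pass version of the reference sample could be blended into each denoising step.

```python
import numpy as np

def lowpass(x, cutoff_bins):
    """Simple FFT low-pass: zero out frequency bins at or above the cutoff."""
    spectrum = np.fft.rfft(x)
    spectrum[cutoff_bins:] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def guided_denoise_step(x_t, denoiser, ref, cutoff_bins, alpha=0.5):
    """One illustrative guided sampling step (hypothetical sketch).

    Blends the low-frequency content of the target speaker's reference
    sample into the denoising model's clean-signal estimate, while keeping
    the model's own high-frequency content.
    """
    x0_hat = denoiser(x_t)                           # model's clean-signal estimate
    low_ref = lowpass(ref, cutoff_bins)              # low frequencies of the target speaker
    low_est = lowpass(x0_hat, cutoff_bins)           # low frequencies of the estimate
    high_est = x0_hat - low_est                      # model's high-frequency content
    # Mix the reference's low band into the estimate's low band.
    return alpha * low_ref + (1.0 - alpha) * low_est + high_est
```

In a real diffusion sampler this step would be applied at each reverse-diffusion iteration, with `denoiser` being the pretrained model; here an identity function suffices to demonstrate the mixing arithmetic.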

Original language: English
Pages (from-to): 2983-2987
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
State: Published - 2022
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: 18 Sep 2022 - 22 Sep 2022
