TTS skins: Speaker conversion via ASR

Adam Polyak*, Lior Wolf, Yaniv Taigman

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present a fully convolutional wav-to-wav network for converting between speakers' voices without relying on text. Our network is based on an encoder-decoder architecture, where the encoder is pre-trained for the task of Automatic Speech Recognition, and a multi-speaker waveform decoder is trained to reconstruct the original signal in an autoregressive manner. We train the network on narrated audiobooks and demonstrate multi-voice TTS in those voices by converting the voice of a TTS robot.
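The pipeline described in the abstract can be sketched as follows: a frozen ASR-pretrained encoder maps waveform frames to (largely speaker-agnostic) phonetic features, and a speaker-conditioned autoregressive decoder regenerates samples in a target voice. This is a minimal NumPy toy, not the paper's implementation: all dimensions, weight matrices, and function names (`encode`, `decode`, `convert`) are hypothetical stand-ins, and the random weights play the role of trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper).
FRAME = 16        # waveform samples per frame
FEAT = 8          # size of ASR-style feature vector
N_SPEAKERS = 4
SPK_DIM = 4

# Frozen "ASR encoder": maps a waveform frame to phonetic-style features.
W_enc = rng.standard_normal((FRAME, FEAT)) * 0.1

# Speaker embedding table and decoder weights (learned in the paper;
# random here purely for illustration).
spk_emb = rng.standard_normal((N_SPEAKERS, SPK_DIM)) * 0.1
W_dec = rng.standard_normal((FEAT + SPK_DIM + 1, 1)) * 0.1

def encode(wav):
    """Frame the waveform and apply the frozen encoder."""
    frames = wav.reshape(-1, FRAME)
    return np.tanh(frames @ W_enc)

def decode(features, speaker_id):
    """Autoregressively generate samples, conditioned on features
    and a target-speaker embedding (each sample sees the previous one)."""
    spk = spk_emb[speaker_id]
    out, prev = [], 0.0
    for feat in features:
        for _ in range(FRAME):
            x = np.concatenate([feat, spk, [prev]])
            prev = float(np.tanh(x @ W_dec))
            out.append(prev)
    return np.array(out)

def convert(wav, target_speaker):
    """Wav-to-wav conversion: speaker-agnostic encoding, then
    reconstruction in the target speaker's voice."""
    return decode(encode(wav), target_speaker)

src = rng.standard_normal(4 * FRAME)     # stand-in for input speech
converted = convert(src, target_speaker=2)
```

The converted signal has the same length as the input; swapping `target_speaker` changes only the decoder's speaker conditioning, which mirrors how the paper turns a single TTS robot voice into multiple narrator voices.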

Original language: English
Pages (from-to): 786-790
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2020-October
DOIs
State: Published - 2020
Event: 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Keywords

  • Human-computer interaction
  • Text to speech
  • Voice conversion
