TY - GEN
T1 - I know that voice
T2 - 8th IAPR International Conference on Biometrics, ICB 2015
AU - Uzan, Lior
AU - Wolf, Lior
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/6/29
Y1 - 2015/6/29
AB - Intentional voice modifications by electronic or non-electronic means challenge automatic speaker recognition systems. Previous work focused on detecting the act of disguise or on identifying everyday speakers disguising their voices. Here, we propose a benchmark for the study of voice disguise by examining the voice variability of professional voice actors. We create a dataset of 114 actors playing 647 characters, containing 19 hours of captured speech divided into 29,733 utterances tagged by character and actor name, which is then further sampled. In a novel text-independent speaker identification benchmark, training on a subset of the characters each actor plays and testing on new, unseen characters, a Convolutional Neural Network trained on spectrograms generated from the utterances achieves an EER of 17.1%, an HTER of 15.9%, and a per-utterance rank-1 recognition rate of 63.5%. An I-Vector based system trained and tested on the same data yields 39.7% EER, 39.4% HTER, and a rank-1 recognition rate of 13.6%.
UR - http://www.scopus.com/inward/record.url?scp=84943329044&partnerID=8YFLogxK
U2 - 10.1109/ICB.2015.7139074
DO - 10.1109/ICB.2015.7139074
M3 - Conference contribution
AN - SCOPUS:84943329044
T3 - Proceedings of 2015 International Conference on Biometrics, ICB 2015
SP - 46
EP - 51
BT - Proceedings of 2015 International Conference on Biometrics, ICB 2015
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 May 2015 through 22 May 2015
ER -