Modeling intra-speaker variability for speaker recognition

Hagai Aronowitz*, Dror Irony, David Burshtein

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


In this paper we present a speaker recognition algorithm that explicitly models intra-speaker inter-session variability. Such variability may be caused by changing speaker characteristics (mood, fatigue, etc.), channel variability, or noise variability. We define a session-space in which each session (either a train or a test session) is a vector. We then calculate a rotation of the session-space in which the estimated intra-speaker subspace is isolated and can be modeled explicitly. We evaluated our technique on the NIST-2004 speaker recognition evaluation corpus and compared it to a GMM baseline system. Results indicate a significant reduction in error rate.
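The abstract describes estimating an intra-speaker (session-variability) subspace from session vectors and rotating the space so that subspace can be treated separately. The paper's exact algorithm is not given here, so the following is only a minimal illustrative sketch of the general idea, assuming sessions are fixed-dimensional vectors grouped by speaker: the intra-speaker subspace is taken as the top eigenvectors of the within-speaker scatter matrix, and its component is projected out of a session vector (all function names are hypothetical).

```python
# Illustrative sketch only, NOT the authors' algorithm: estimate the
# intra-speaker subspace from within-speaker scatter and project it out.
import numpy as np

def intra_speaker_basis(sessions_by_speaker, k):
    """Top-k eigenvectors of the within-speaker scatter matrix.

    sessions_by_speaker: list of (n_i, d) arrays, one array per speaker,
    each row a session vector for that speaker.
    """
    diffs = []
    for sessions in sessions_by_speaker:
        mean = sessions.mean(axis=0)
        diffs.append(sessions - mean)        # remove each speaker's own mean
    D = np.vstack(diffs)                     # pooled within-speaker deviations
    scatter = D.T @ D / len(D)               # (d, d) within-speaker scatter
    _, vecs = np.linalg.eigh(scatter)        # eigenvalues in ascending order
    return vecs[:, -k:]                      # k dominant intra-speaker directions

def project_out(x, basis):
    """Remove the intra-speaker subspace component from a session vector."""
    return x - basis @ (basis.T @ x)
```

The projection step is the simplest way to "isolate" the estimated subspace; the paper instead models that subspace explicitly after the rotation, which this sketch does not attempt.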

Original language: English
Number of pages: 4
State: Published - 2005
Event: 9th European Conference on Speech Communication and Technology - Lisbon, Portugal
Duration: 4 Sep 2005 - 8 Sep 2005


Conference: 9th European Conference on Speech Communication and Technology


