Support vector machine training for improved hidden Markov modeling

Alba Sloin*, David Burshtein

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

95 Scopus citations

Abstract

We present a discriminative training algorithm that uses support vector machines (SVMs) to improve the classification of discrete and continuous output probability hidden Markov models (HMMs). The algorithm uses a set of maximum-likelihood (ML) trained HMMs as a baseline system, together with an SVM training scheme that rescores the results of the baseline HMMs. It turns out that the rescoring model can be represented as an unnormalized HMM. We describe two algorithms for training the unnormalized HMM models in both the discrete and continuous cases. One of the algorithms produces a single set of unnormalized HMMs that can be used in the standard recognition procedure (the Viterbi recognizer) as if they were plain HMMs. We use a toy problem and an isolated noisy digit recognition task to compare our new method to standard ML training. Our experiments show that SVM rescoring of hidden Markov models typically reduces the error rate significantly compared to standard ML training.
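Since the rescoring model is itself an (unnormalized) HMM, it can be decoded with the usual Viterbi recursion. The sketch below is purely illustrative and is not the paper's implementation: it is a minimal log-domain Viterbi decoder for a discrete-output HMM, where the scores `log_init`, `log_trans`, and `log_emit` need not correspond to normalized probabilities, which is exactly why unnormalized models plug into the same recognizer.

```python
import math

def viterbi(obs, states, log_init, log_trans, log_emit):
    """Most likely state sequence under log-domain HMM scores.

    log_init[s], log_trans[s][t], and log_emit[s][o] are additive
    log scores; they are not required to come from normalized
    distributions, so the same decoder handles unnormalized HMMs.
    """
    # delta[s] = best log score of any path ending in state s
    delta = {s: log_init[s] + log_emit[s][obs[0]] for s in states}
    back = []  # one backpointer dict per time step after the first
    for o in obs[1:]:
        ptr, new_delta = {}, {}
        for t in states:
            best_prev = max(states, key=lambda s: delta[s] + log_trans[s][t])
            ptr[t] = best_prev
            new_delta[t] = delta[best_prev] + log_trans[best_prev][t] + log_emit[t][o]
        delta, _ = new_delta, back.append(ptr)
    # Trace back from the best final state
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), delta[last]
```

For isolated-word recognition as in the paper's digit task, one such decoder would be run per word model and the word with the highest terminal score selected.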

Original language: English
Pages (from-to): 172-188
Number of pages: 17
Journal: IEEE Transactions on Signal Processing
Volume: 56
Issue number: 1
DOIs
State: Published - Jan 2008

Funding

Funders:
- EC 6th Framework IST
- Israeli Ministry of Industry and Trade
- Yitzhak and Chaya Weinstein Research Institute for Signal Processing at Tel-Aviv University

Keywords

- Discriminative training
- Hidden Markov model (HMM)
- Speech recognition
- Support vector machine (SVM)
