Concentration bounds for unigram language models

Evgeny Drukh*, Yishay Mansour

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We show several high-probability concentration bounds for learning unigram language models. One interesting quantity is the probability of all words appearing exactly k times in a sample of size m. A standard estimator for this quantity is the Good-Turing estimator. The existing analysis of its error shows a high-probability bound of approximately O(k/√m). We improve its dependency on k to O(k^(1/4)/√m + k/m). We also analyze the empirical frequencies estimator, showing that with high probability its error is bounded by approximately O(1/k + √k/m). We derive a combined estimator, which has an error of approximately O(m^(-2/5)), for any k. A standard measure for the quality of a learning algorithm is its expected per-word log-loss. The leave-one-out method can be used for estimating the log-loss of the unigram model. We show that its error has a high-probability bound of approximately O(1/√m), for any underlying distribution. We also bound the log-loss a priori, as a function of various parameters of the distribution.
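For concreteness, the sketch below illustrates the two quantities compared in the abstract, using the standard textbook forms of the estimators: the Good-Turing estimate (k+1)·n_{k+1}/m and the empirical estimate k·n_k/m of the total probability mass of words appearing exactly k times. The function names and toy corpus are illustrative, not taken from the paper.

```python
from collections import Counter

def good_turing_mass(sample, k):
    """Good-Turing estimate of M_k, the total probability mass of words
    that appear exactly k times in the sample: (k+1) * n_{k+1} / m,
    where n_{k+1} is the number of distinct words seen k+1 times."""
    m = len(sample)
    counts = Counter(sample)                 # word -> number of occurrences
    freq_of_freq = Counter(counts.values())  # r -> n_r, number of words seen r times
    return (k + 1) * freq_of_freq.get(k + 1, 0) / m

def empirical_mass(sample, k):
    """Empirical-frequencies estimate of the same quantity: k * n_k / m."""
    m = len(sample)
    counts = Counter(sample)
    n_k = sum(1 for c in counts.values() if c == k)
    return k * n_k / m

# Hypothetical usage on a toy corpus:
sample = "the cat sat on the mat the cat".split()
print(good_turing_mass(sample, 1), empirical_mass(sample, 1))
```

The paper's bounds concern how far such estimates deviate, with high probability, from the true mass M_k under the underlying distribution.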

Original language: English
Journal: Journal of Machine Learning Research
Volume: 6
State: Published - 2005

Keywords

  • Chernoff bounds
  • Good-Turing estimators
  • Leave-one-out estimation
  • Logarithmic loss
