The power of amnesia: Learning probabilistic automata with variable memory length

Dana Ron*, Yoram Singer, Naftali Tishby

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis that the learning algorithm outputs can be made small with high confidence, in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. Here we present two applications of the algorithm. In the first, we apply the algorithm to construct a model of the English language and use this model to correct corrupted text. In the second, we construct a simple stochastic model for E. coli DNA.
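The variable-memory idea in the abstract can be illustrated with a small sketch: a learner that counts next-symbol frequencies for contexts of bounded length and, at prediction time, backs off to the longest context it has actually observed. This is only an illustrative simplification, not the paper's algorithm, which additionally smooths probabilities and prunes contexts to obtain the stated KL-divergence and sample-complexity guarantees; the function names here are hypothetical.

```python
from collections import defaultdict

def train_counts(text, max_depth):
    """Count next-symbol occurrences for every context (suffix of the
    history) of length 0..max_depth observed in the training string."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text)):
        for d in range(max_depth + 1):
            if i - d < 0:
                break
            context = text[i - d:i]  # the d symbols preceding position i
            counts[context][text[i]] += 1
    return counts

def predict(counts, history, symbol, max_depth):
    """Probability of `symbol` given the longest suffix of `history`
    that was seen during training (variable memory length)."""
    for d in range(min(max_depth, len(history)), -1, -1):
        context = history[len(history) - d:]
        if context in counts:
            total = sum(counts[context].values())
            if total > 0:
                return counts[context][symbol] / total
    return 0.0

# Train on a short periodic string; "ab" deterministically predicts "a".
counts = train_counts("abababab", max_depth=2)
```

In this toy example the context `"ab"` has only ever been followed by `"a"`, so the estimated conditional probability is 1; a shorter context would be used automatically whenever the full two-symbol context was never observed.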

Original language: English
Pages (from-to): 117-149
Number of pages: 33
Journal: Machine Learning
Issue number: 2-3
State: Published - 1996
Externally published: Yes


Keywords

  • Learning distributions
  • Markov models
  • Probabilistic automata
  • Suffix trees
  • Text correction


