## Abstract

We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Suffix Automata (PSA). Though hardness results are known for learning distributions generated by general probabilistic automata, we prove that the algorithm we present can efficiently learn distributions generated by PSAs. In particular, we show that for any target PSA, the KL-divergence between the distribution generated by the target and the distribution generated by the hypothesis the learning algorithm outputs can be made small with high confidence, in polynomial time and sample complexity. The learning algorithm is motivated by applications in human-machine interaction. We present two applications of the algorithm: in the first, we use it to construct a model of the English language and apply this model to correct corrupted text; in the second, we construct a simple stochastic model for E. coli DNA.
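The core idea of a variable memory length Markov process is that the next-symbol distribution depends on a context (suffix) of the history whose length varies by context. The sketch below is an illustrative toy, not the paper's PSA learning algorithm (which grows a suffix tree guided by statistical tests and carries KL-divergence guarantees): it simply stores next-symbol counts for every suffix up to a maximum depth and predicts from the longest suffix observed in training. The class name `SuffixPredictor` and its parameters are our own, assumed for illustration.

```python
from collections import defaultdict

class SuffixPredictor:
    """Toy variable-memory next-symbol predictor (illustrative sketch only;
    not the PSA learning algorithm of the paper)."""

    def __init__(self, max_depth=3):
        self.max_depth = max_depth
        # counts[context][symbol] = times `symbol` followed `context` in training
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Record every context of length 0..max_depth preceding each symbol.
        for i in range(len(text)):
            for d in range(self.max_depth + 1):
                if d <= i:
                    self.counts[text[i - d:i]][text[i]] += 1

    def predict(self, history):
        # Back off to shorter suffixes until one was seen in training;
        # the empty context acts as a unigram fallback.
        for d in range(min(self.max_depth, len(history)), -1, -1):
            context = history[len(history) - d:]
            if context in self.counts:
                dist = self.counts[context]
                total = sum(dist.values())
                return {s: c / total for s, c in dist.items()}
        return {}
```

For example, after training on `"abracadabra"` with `max_depth=2`, the context `"ab"` is always followed by `"r"`, so `predict("ab")` returns `{'r': 1.0}`, while `predict("a")` spreads mass over `b`, `c`, and `d`.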

| Original language | English |
|---|---|
| Pages (from-to) | 117-149 |
| Number of pages | 33 |
| Journal | Machine Learning |
| Volume | 25 |
| Issue number | 2-3 |
| DOIs | |
| State | Published - 1996 |
| Externally published | Yes |

## Keywords

- Learning distributions
- Markov models
- Probabilistic automata
- Suffix trees
- Text correction