Universal sequential learning and decision from individual data sequences

Neri Merhav, Meir Feder

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Sequential learning and decision algorithms, with various areas of application, are investigated under a family of additive loss functions for individual data sequences. Simple universal sequential schemes are known, under certain conditions, to approach optimality uniformly as fast as n⁻¹ log n, where n is the sample size. For the case of finite-alphabet observations, the class of schemes that can be implemented by finite-state machines (FSMs) is studied. It is shown that there exist Markovian machines with sufficiently long memory that are asymptotically nearly as good as any given FSM (deterministic or randomized) for the purpose of sequential decision. For the continuous-valued observation case, a useful class of parametric schemes is discussed, with special attention to the recursive least squares (RLS) algorithm.
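As an illustration of the kind of parametric scheme mentioned for continuous-valued observations, the following is a minimal sketch of the standard recursive least squares (RLS) algorithm for a linear predictor. This is not the paper's own implementation; the function name, parameters, and defaults are illustrative.

```python
import numpy as np

def rls_fit(x, y, lam=0.99, delta=1.0):
    """Sketch of recursive least squares (RLS) for a linear predictor.

    x: (n, d) array of regressor vectors; y: (n,) array of targets.
    lam: forgetting factor in (0, 1]; delta: initial regularization scale.
    Returns the final parameter estimate w (illustrative, not from the paper).
    """
    n, d = x.shape
    w = np.zeros(d)            # running parameter estimate
    P = np.eye(d) / delta      # inverse of the (weighted) autocorrelation matrix
    for t in range(n):
        xt = x[t]
        # gain vector
        k = P @ xt / (lam + xt @ P @ xt)
        # a priori prediction error on the new observation
        e = y[t] - w @ xt
        # update the estimate and the inverse correlation matrix
        w = w + k * e
        P = (P - np.outer(k, xt @ P)) / lam
    return w
```

With lam = 1.0 the recursion accumulates all past data equally and converges to the (regularized) batch least-squares solution; lam < 1 discounts old observations, which is the usual choice when tracking a sequence whose statistics may drift.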

Original language: English
Title of host publication: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
Publisher: Association for Computing Machinery (ACM)
Pages: 413-427
Number of pages: 15
ISBN (Print): 089791497X, 9780897914970
DOIs
State: Published - 1992
Externally published: Yes
Event: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory - Pittsburgh, PA, USA
Duration: 27 Jul 1992 - 29 Jul 1992

Publication series

Name: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory

Conference

Conference: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
City: Pittsburgh, PA, USA
Period: 27/07/92 - 29/07/92
