Optimal universal learning and prediction of probabilistic concepts

Meir Feder*, Yoav Freund, Yishay Mansour

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Abstract

We consider a setup of the supervised learning problem in which a universal predictor is sought. A classical result in universal coding (Gallager, 1976) states that the encoder attains the min-max redundancy; recently, Merhav et al. (1995) showed that the performance of this predictor is a lower bound on the performance of any universal coder. The solution proposed here for the supervised learning problem is Bayesian: it determines an optimal way to choose the Bayesian 'prior' for the supervised learning problem, and the resulting predictor has a strongly sequential, non-anticipating structure.
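The sequential Bayesian predictor referred to in the abstract can be illustrated, in a much-simplified setting, by a posterior-weighted mixture over a finite class of Bernoulli "experts" (the paper's construction and choice of prior are more general; the class, the uniform prior, and all function names below are illustrative assumptions, not taken from the paper):

```python
import math

def mixture_predictor(bits, thetas, prior=None):
    """Sequential Bayesian mixture over Bernoulli experts (illustrative sketch).

    At each step, predicts p(x_t = 1 | x_1..x_{t-1}) as the posterior-weighted
    average of the experts' predictions, then performs the Bayes update.
    Returns the cumulative log-loss (in nats) of the mixture predictor.
    """
    k = len(thetas)
    w = list(prior) if prior else [1.0 / k] * k  # prior weights (uniform by default)
    loss = 0.0
    for x in bits:
        p1 = sum(wi * th for wi, th in zip(w, thetas))  # mixture prediction for x = 1
        p = p1 if x == 1 else 1.0 - p1
        loss += -math.log(p)
        # Bayes update: reweight each expert by its likelihood of the observed bit
        w = [wi * (th if x == 1 else 1.0 - th) for wi, th in zip(w, thetas)]
        z = sum(w)
        w = [wi / z for wi in w]
    return loss

def expert_loss(bits, theta):
    """Cumulative log-loss of a single fixed Bernoulli expert."""
    return sum(-math.log(theta if x == 1 else 1.0 - theta) for x in bits)

bits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
thetas = [0.25, 0.5, 0.75]
mix = mixture_predictor(bits, thetas)
best = min(expert_loss(bits, th) for th in thetas)
# With a uniform prior, the mixture's regret against the best expert
# is at most log(k): the mixture probability of the sequence is at
# least (1/k) times the best expert's probability.
print(mix - best <= math.log(len(thetas)))
```

The non-anticipating structure is visible in the loop: each prediction uses only the posterior computed from past observations, before the current bit is revealed.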

Original language: English
Pages: 233
Number of pages: 1
State: Published - 1995
Event: Proceedings of the 1995 IEEE International Symposium on Information Theory - Whistler, BC, Canada
Duration: 17 Sep 1995 - 22 Sep 1995

