Universal Learning of Individual Data

Yaniv Fogel, Meir Feder

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Universal supervised learning of individual data is considered from an information-theoretic point of view in the standard supervised 'batch' setting, where prediction of a test sample is made after the entire training set has been observed. In this individual setting the features and labels, in both the training set and the test, are specific, deterministic quantities. Prediction loss is naturally measured by the log-loss. The presented results provide a minimax universal learning scheme, termed the Predictive Normalized Maximum Likelihood (pNML), that competes with a 'genie' (or reference) that knows the true test label. In addition, a pointwise learnability measure associated with the pNML, for the specific training and test data, is provided. This measure may also indicate the performance of the commonly used Empirical Risk Minimization (ERM) learner.
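The pNML idea described above can be illustrated with a minimal sketch. The example below assumes a toy Bernoulli hypothesis class with no features, purely for illustration; the paper treats general supervised learning with features and arbitrary hypothesis classes. For each candidate test label, a 'genie' refits the maximum-likelihood parameter on the training data plus that candidate, evaluates the candidate's probability under the refit model, and the results are normalized; the log of the normalization factor is the pointwise regret (learnability measure).

```python
import math

def pnml_bernoulli(train_labels, candidates=(0, 1)):
    """Toy pNML sketch for a Bernoulli hypothesis class (no features).

    For each candidate test label y, refit the maximum-likelihood
    parameter on training data plus y (the 'genie'), evaluate the
    genie's probability of y, then normalize over all candidates.
    Returns the pNML distribution and the pointwise regret log(K),
    where K is the normalization factor.
    """
    n = len(train_labels)
    genie_probs = {}
    for y in candidates:
        # ML estimate of the Bernoulli parameter, including the candidate label
        theta = (sum(train_labels) + y) / (n + 1)
        genie_probs[y] = theta if y == 1 else 1.0 - theta
    norm = sum(genie_probs.values())           # normalization factor K >= 1
    pnml = {y: p / norm for y, p in genie_probs.items()}
    regret = math.log(norm)                    # pointwise learnability measure
    return pnml, regret

# Example: with training labels [1, 1, 1, 0], the genie probabilities are
# 0.8 (for y=1) and 0.4 (for y=0), so K = 1.2 and regret = log(1.2).
probs, regret = pnml_bernoulli([1, 1, 1, 0])
```

A small regret means the genie's advantage from knowing the test label is limited, so the sample is "learnable" in the sense sketched in the abstract; a large regret flags test points on which no learner can do well uniformly.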

Original language: English
Title of host publication: 2019 IEEE International Symposium on Information Theory, ISIT 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 5
ISBN (Electronic): 9781538692912
State: Published - Jul 2019
Event: 2019 IEEE International Symposium on Information Theory, ISIT 2019 - Paris, France
Duration: 7 Jul 2019 - 12 Jul 2019

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
ISSN (Print): 2157-8095


Conference: 2019 IEEE International Symposium on Information Theory, ISIT 2019


