On the problem of on-line learning with log-loss

Yaniv Fogel, Meir Feder

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper we consider the problem of on-line learning with respect to the logarithmic loss, where the learner provides a probability assignment for the next label given the past and current data samples and the past labels. We consider the problem in both the individual and the stochastic settings. Our first result is a class of new universal on-line probability assignment schemes based on the mixture approach. In classical learning, it is well known that there are model classes that can be learned in batch but cannot be learned sequentially for all data sample sequences. We show that for these model classes the proposed mixture schemes lead to a vanishing regret in the individual setting when the adversary is somewhat constrained. In the stochastic setting we show that any on-line solution for the log-loss may be used to obtain a solution for a wide variety of loss functions.
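To give a concrete sense of the mixture approach for sequential probability assignment under log-loss, the sketch below (not the paper's construction) runs a uniform-prior Bayesian mixture over a finite class of Bernoulli experts; the experts, data, and the function name mixture_assignment are illustrative assumptions. The mixture's cumulative log-loss exceeds that of the best expert in hindsight by at most log K for K experts.

```python
import numpy as np

def mixture_assignment(labels, thetas):
    """Sequentially assign P(x_t = 1 | x_1..x_{t-1}) via a uniform-prior mixture
    over Bernoulli(theta) experts (a minimal illustrative sketch)."""
    log_w = np.zeros(len(thetas))          # unnormalized log posterior weights
    preds = []
    for x in labels:
        w = np.exp(log_w - log_w.max())    # normalize in a numerically stable way
        w /= w.sum()
        p1 = float(np.dot(w, thetas))      # mixture probability that the next label is 1
        preds.append(p1)
        # Bayes update: multiply each weight by that expert's likelihood of x
        log_w += np.log(np.where(x == 1, thetas, 1.0 - thetas))
    return np.array(preds)

# Example: regret of the mixture vs. the best Bernoulli expert in hindsight
rng = np.random.default_rng(0)
labels = rng.binomial(1, 0.7, size=200)    # synthetic label sequence
thetas = np.linspace(0.05, 0.95, 19)       # finite model class
preds = mixture_assignment(labels, thetas)

loss_mix = -np.sum(np.where(labels == 1, np.log(preds), np.log(1 - preds)))
loss_experts = [-np.sum(np.where(labels == 1, np.log(t), np.log(1 - t))) for t in thetas]
print("mixture log-loss:   ", loss_mix)
print("best expert log-loss:", min(loss_experts))
print("regret:", loss_mix - min(loss_experts), "<= log K =", np.log(len(thetas)))
```

The design choice illustrated here is the standard one behind mixture (Bayesian) schemes: the predictor's per-round probability is a posterior-weighted average of the experts' predictions, so its cumulative log-loss telescopes into the log of a mixture likelihood, which immediately bounds the regret against any single expert.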

Original language: English
Title of host publication: 2017 IEEE International Symposium on Information Theory, ISIT 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2995-2999
Number of pages: 5
ISBN (Electronic): 9781509040964
DOIs
State: Published - 9 Aug 2017
Event: 2017 IEEE International Symposium on Information Theory, ISIT 2017 - Aachen, Germany
Duration: 25 Jun 2017 - 30 Jun 2017

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
ISSN (Print): 2157-8095

Conference

Conference: 2017 IEEE International Symposium on Information Theory, ISIT 2017
Country/Territory: Germany
City: Aachen
Period: 25/06/17 - 30/06/17
