Sequential prediction under log-loss and misspecification

Meir Feder, Yury Polyanskiy

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider the question of sequential prediction under the log-loss in terms of cumulative regret. Namely, given a hypothesis class of distributions, the learner sequentially predicts the (distribution of the) next letter in the sequence, and its performance is compared to the baseline of the best constant predictor from the hypothesis class. The well-specified case corresponds to the additional assumption that the data-generating distribution belongs to the hypothesis class as well. Here we present results for the more general misspecified case. Due to special properties of the log-loss, the same problem arises in the context of competitive optimality in density estimation and model selection. For the d-dimensional Gaussian location hypothesis class, we show that the cumulative regrets in the well-specified and misspecified cases asymptotically coincide. In other words, we provide an o(1) characterization of the distribution-free (or PAC) regret in this case – the first such result as far as we know. We recall that the worst-case (or individual-sequence) regret in this case is larger by an additive constant d/2 + o(1). Surprisingly, neither traditional Bayesian estimators nor Shtarkov’s normalized maximum likelihood achieve the PAC regret, and our estimator requires a special “robustification” against heavy-tailed data. In addition, we show two general results for the misspecified regret: the existence and uniqueness of the optimal estimator, and a bound sandwiching the misspecified regret between the well-specified regrets of (asymptotically) close hypothesis classes.
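For context, here is a minimal sketch of the cumulative-regret setup under log-loss, written in notation of our own choosing (the paper's exact definitions may differ in details). Given a hypothesis class {p_θ : θ ∈ Θ} and a predictor q that outputs a conditional distribution q(· | x^{t-1}) at each step, the expected cumulative regret against data X^n drawn from a distribution P can be written as

\[
\mathrm{Reg}_n(q, P) \;=\; \mathbb{E}_{X^n \sim P}\!\left[\sum_{t=1}^{n} \log \frac{1}{q(X_t \mid X^{t-1})}\right] \;-\; \inf_{\theta \in \Theta} \, \mathbb{E}_{X^n \sim P}\!\left[\sum_{t=1}^{n} \log \frac{1}{p_\theta(X_t)}\right].
\]

In this sketch, the well-specified case restricts P to product distributions p_θ^{⊗n} with θ ∈ Θ, the distribution-free (PAC) regret discussed in the abstract takes the supremum of Reg_n(q, P) over all data-generating P, and the worst-case (individual-sequence) regret replaces the expectations by a supremum over sequences x^n.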

Original language: English
Pages (from-to): 1937-1964
Number of pages: 28
Journal: Proceedings of Machine Learning Research
Volume: 134
State: Published - 2021
Event: 34th Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 – 19 Aug 2021

Funding

Funders and funder numbers:
• Center for Science of Information
• National Science Foundation: ECCS-1808692, CCF-09-39370
• Israel Science Foundation: 819/20

Keywords

• Online learning
• agnostic learning
• distribution-free PAC learning
• log-loss
• misspecified models
• sequential probability assignment
