A limitation of the PAC-Bayes framework

Roi Livni, Shay Moran

Research output: Contribution to journal › Conference article › peer-review


Abstract

PAC-Bayes is a useful framework for deriving generalization bounds, introduced by McAllester (1998). The framework has the flexibility to derive distribution- and algorithm-dependent bounds, which are often tighter than VC-related uniform-convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework: we demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in one dimension; it is well known that this task is learnable using just O(log(1/δ)/ε) examples. On the other hand, we show that this fact cannot be proved via a PAC-Bayes analysis: for any algorithm that learns one-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
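
For context, the bound the abstract refers to can be stated in one common McAllester-style form (the notation here, with prior P, posterior Q, sample S of size m, and confidence parameter δ, is supplied for illustration and is not taken from this record): with probability at least 1 - δ over the sample, simultaneously for all posteriors Q,

\[
L_{\mathcal{D}}(Q) \;\le\; \widehat{L}_S(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}}.
\]

Informally, the paper's negative result says that for one-dimensional linear classifiers no choice of prior P keeps the KL(Q‖P) term bounded across all realizable distributions, so the right-hand side can be made arbitrarily large even though the task itself requires only O(log(1/δ)/ε) examples.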

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: 6 Dec 2020 - 12 Dec 2020

Funding

Funders (funder numbers where available):
Google Research
United States-Israel Binational Science Foundation
Israel Science Foundation (1225/20, 2188/20)
