Algorithmic stability and sanity-check bounds for leave-one-out cross-validation

Michael Kearns*, Dana Ron

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

Sanity-check bounds were proven for the error of the leave-one-out cross-validation estimate of the generalization error. Any nontrivial bound on the error of leave-one-out relies on some notion of algorithmic stability. The weaker notion of error stability was applied to obtain sanity-check bounds on leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. Lower bounds demonstrated that error stability is necessary for good performance of the leave-one-out estimate, and that for training error minimization algorithms the worst-case bounds still depend on the Vapnik-Chervonenkis dimension of the hypothesis class.
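For concreteness, below is a minimal sketch of the leave-one-out estimate that the bounds concern: the learning algorithm is rerun on each size-(m-1) subsample and tested on the held-out point. The names train, sample, and the 0-1 loss are illustrative assumptions, not notation from the paper.

from typing import Callable, List, Tuple

Example = Tuple[list, int]  # (feature vector x, binary label y)
Hypothesis = Callable[[list], int]

def leave_one_out_error(
    train: Callable[[List[Example]], Hypothesis],
    sample: List[Example],
) -> float:
    """Leave-one-out cross-validation estimate of the generalization error.

    For each example (x_i, y_i), run the learning algorithm on the other
    m - 1 examples and test the resulting hypothesis on the held-out point;
    the estimate is the fraction of held-out points that are misclassified.
    """
    m = len(sample)
    mistakes = 0
    for i in range(m):
        held_out_x, held_out_y = sample[i]
        # Train on the sample with the i-th example deleted.
        hypothesis = train(sample[:i] + sample[i + 1:])
        if hypothesis(held_out_x) != held_out_y:
            mistakes += 1
    return mistakes / m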

Original language: English
Pages: 152-162
Number of pages: 11
State: Published - 1997
Externally published: Yes
Event: Proceedings of the 1997 10th Annual Conference on Computational Learning Theory - Nashville, TN, USA
Duration: 6 Jul 1997 - 9 Jul 1997

Conference

Conference: Proceedings of the 1997 10th Annual Conference on Computational Learning Theory
City: Nashville, TN, USA
Period: 6/07/97 - 9/07/97
