Abstract
Sanity-check bounds were proven for the error of the leave-one-out cross-validation estimate of the generalization error, i.e., bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. It was shown that any nontrivial bound on the error of leave-one-out relies on some notion of algorithmic stability. A weaker notion of error stability was then applied to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. Lower bounds demonstrated the necessity of error stability for good performance by the leave-one-out estimate, as well as the fact that for training error minimization algorithms the worst-case bounds still depend on the Vapnik-Chervonenkis dimension of the hypothesis class.
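A minimal sketch of the quantities the abstract compares: the leave-one-out cross-validation estimate versus the training error estimate, for a generic learning algorithm under 0/1 loss. The threshold learner, the toy data, and all function names below are illustrative assumptions, not from the paper.

```python
from typing import Callable, Sequence, Tuple

Example = Tuple[float, int]            # (input x, label y)
Hypothesis = Callable[[float], int]    # maps an input to a predicted label
Learner = Callable[[Sequence[Example]], Hypothesis]

def leave_one_out_error(learn: Learner, sample: Sequence[Example]) -> float:
    """Leave-one-out estimate: for each point, train on the remaining m-1
    points and record the 0/1 loss on the held-out point; return the average."""
    m = len(sample)
    mistakes = 0
    for i in range(m):
        x, y = sample[i]
        rest = [ex for j, ex in enumerate(sample) if j != i]
        h = learn(rest)
        mistakes += int(h(x) != y)
    return mistakes / m

def training_error(h: Hypothesis, sample: Sequence[Example]) -> float:
    """Empirical 0/1 error of a fixed hypothesis on the full training sample."""
    return sum(int(h(x) != y) for x, y in sample) / len(sample)

if __name__ == "__main__":
    # Hypothetical training error minimization learner: pick the threshold
    # classifier x -> [x >= t] with the smallest empirical error.
    def learn_threshold(data: Sequence[Example]) -> Hypothesis:
        thresholds = sorted({x for x, _ in data})
        best = min(thresholds,
                   key=lambda t: training_error(lambda x: int(x >= t), data))
        return lambda x: int(x >= best)

    data = [(0.1, 0), (0.3, 0), (0.45, 1), (0.6, 1), (0.8, 1)]
    print("leave-one-out estimate:", leave_one_out_error(learn_threshold, data))
    print("training error:", training_error(learn_threshold(data), data))
```

On this toy sample the training error is 0 while the leave-one-out estimate is positive, which is the kind of gap the sanity-check bounds quantify in the worst case.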
Original language | English |
---|---|
Pages | 152-162 |
Number of pages | 11 |
State | Published - 1997 |
Externally published | Yes |
Event | Proceedings of the 1997 10th Annual Conference on Computational Learning Theory - Nashville, TN, USA |
Duration | 6 Jul 1997 → 9 Jul 1997 |
Conference
Conference | Proceedings of the 1997 10th Annual Conference on Computational Learning Theory |
---|---|
City | Nashville, TN, USA |
Period | 6/07/97 → 9/07/97 |