Abstract
Regularization aims to improve prediction performance by trading an increase in training error for better agreement between training and prediction errors, a trade-off often captured through decreased degrees of freedom. In this paper we give examples showing that regularization can increase the degrees of freedom in common models, including the lasso and ridge regression. In such situations both training error and degrees of freedom increase, making the regularization inherently without merit. Two important scenarios are described in which the expected reduction in degrees of freedom is guaranteed: all symmetric linear smoothers, and convex-constrained linear regression models such as ridge regression and the lasso, when compared to unconstrained linear regression.
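As a concrete illustration of the notion of degrees of freedom used here, the following is a minimal Python sketch, assuming the standard covariance definition df = Σᵢ Cov(ŷᵢ, yᵢ)/σ² estimated by Monte Carlo. It uses ridge regression, which is a symmetric linear smoother and so falls in the guaranteed-reduction scenario above: its degrees of freedom equal the trace of the smoother matrix and decrease as the penalty grows. The design matrix, noise level, and penalty grid are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 5, 1.0
X = rng.standard_normal((n, p))      # illustrative fixed design
mu = X @ rng.standard_normal(p)      # fixed mean; only the noise is random

def ridge_smoother(X, lam):
    """Smoother matrix S with yhat = S y for ridge regression."""
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

def df_monte_carlo(S, n_rep=2000):
    """Estimate df = sum_i Cov(yhat_i, y_i) / sigma^2 by simulation."""
    Y = mu + sigma * rng.standard_normal((n_rep, n))   # replicated responses
    Yhat = Y @ S.T                                     # fitted values per replicate
    cov = ((Y - Y.mean(0)) * (Yhat - Yhat.mean(0))).sum(0) / (n_rep - 1)
    return cov.sum() / sigma**2

for lam in [0.0, 1.0, 10.0]:
    S = ridge_smoother(X, lam)
    print(f"lambda={lam:5.1f}  trace df={np.trace(S):.3f}  "
          f"MC df={df_monte_carlo(S):.3f}")
```

For a linear smoother the Monte Carlo estimate should agree with tr(S); at λ = 0 it recovers the unconstrained least-squares value p, and it decreases monotonically in λ, consistent with the guaranteed-reduction result. The paper's counterexamples, by contrast, involve settings where such monotonicity fails.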
| Original language | English |
|---|---|
| Pages (from-to) | 771–784 |
| Number of pages | 14 |
| Journal | Biometrika |
| Volume | 101 |
| Issue number | 4 |
| State | Published - 1 Dec 2014 |
Keywords
- Degrees of freedom
- Model selection
- Optimism
- Regularization