When does more regularization imply fewer degrees of freedom? Sufficient conditions and counterexamples

S. Kaufman, S. Rosset

Abstract

Regularization aims to improve prediction performance by trading an increase in training error for better agreement between training and prediction errors, which is often captured through decreased degrees of freedom. In this paper we give examples showing that regularization can increase the degrees of freedom in commonly used models, including the lasso and ridge regression. In such situations, both training error and degrees of freedom increase, making the regularization inherently without merit. Two important scenarios are described in which the expected reduction in degrees of freedom is guaranteed: all symmetric linear smoothers, and convex-constrained linear regression models such as ridge regression and the lasso when compared to unconstrained linear regression.
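The guaranteed case for symmetric linear smoothers can be checked numerically. The sketch below is not the paper's code; the data and the grid of λ values are illustrative. It assumes the standard optimism-based definition of degrees of freedom: for a linear smoother ŷ = Sy, df = tr(S). For ridge regression the smoother matrix is S_λ = X(XᵀX + λI)⁻¹Xᵀ, so tr(S_λ) = Σᵢ dᵢ²/(dᵢ² + λ) over the singular values dᵢ of X, which is monotonically decreasing in λ.

```python
# Minimal sketch: degrees of freedom of ridge regression as tr(S_lambda),
# illustrating the symmetric-linear-smoother case where more regularization
# is guaranteed to mean fewer degrees of freedom.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))  # illustrative design matrix

def ridge_df(X, lam):
    """Trace of the ridge smoother matrix S_lambda = X (X'X + lam I)^{-1} X'."""
    p = X.shape[1]
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(S)

lambdas = [0.0, 0.1, 1.0, 10.0, 100.0]
dfs = [ridge_df(X, lam) for lam in lambdas]
for lam, df in zip(lambdas, dfs):
    print(f"lambda={lam:7.1f}  df={df:6.3f}")

# df decreases monotonically in lambda, from p (OLS) toward 0.
assert all(a >= b for a, b in zip(dfs, dfs[1:]))
```

By contrast, the paper's counterexamples concern estimators such as the lasso, whose smoother is not a fixed symmetric linear map, so no such monotonicity argument applies.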

Original language: English
Pages (from-to): 771-784
Number of pages: 14
Journal: Biometrika
Volume: 101
Issue number: 4
State: Published - 1 Dec 2014

Keywords

  • Degrees of freedom
  • Model selection
  • Optimism
  • Regularization
