On the cross-validation bias due to unsupervised preprocessing

Amit Moscovich*, Saharon Rosset

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Cross-validation is the de facto standard for predictive model evaluation and selection. In proper use, it provides an unbiased estimate of a model's predictive performance. However, data sets often undergo various forms of data-dependent preprocessing, such as mean-centring, rescaling, dimensionality reduction and outlier removal. It is often believed that such preprocessing stages, if done in an unsupervised manner (one that does not incorporate the class labels or response values), are generally safe to perform prior to cross-validation. In this paper, we study three commonly practised preprocessing procedures prior to a regression analysis: (i) variance-based feature selection; (ii) grouping of rare categorical features; and (iii) feature rescaling. We demonstrate that unsupervised preprocessing can, in fact, introduce a substantial bias into cross-validation estimates and potentially hurt model selection. This bias may be either positive or negative, and its exact magnitude depends on all the parameters of the problem in an intricate manner. Further research is needed to understand the real-world impact of this bias across different application domains, particularly when dealing with small sample sizes and high-dimensional data.
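To make the bias concrete, the following minimal sketch (illustrative only, not taken from the paper) contrasts the two protocols for procedure (i), variance-based feature selection: selecting features on the full data set before cross-validation versus refitting the selector inside each training fold. The data set, model and threshold are hypothetical choices made for this demonstration.

```python
# Illustrative sketch: variance-based feature selection outside vs inside
# cross-validation. All data and parameter choices here are hypothetical,
# not those used by Moscovich and Rosset.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 50, 500                                # small n, high-dimensional p
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * rng.standard_normal(n)    # response driven by one feature

selector = VarianceThreshold(threshold=1.2)   # keep only high-variance features

# (a) Preprocess first, then cross-validate: the selector has already seen
# every fold, so information leaks across the train/test split.
X_sel = selector.fit_transform(X)
score_outside = cross_val_score(Ridge(), X_sel, y, cv=5).mean()

# (b) Preprocess inside each fold via a pipeline: the selector is refit on
# each training fold, so the held-out fold stays untouched.
pipe = make_pipeline(VarianceThreshold(threshold=1.2), Ridge())
score_inside = cross_val_score(pipe, X, y, cv=5).mean()

print(f"CV R^2, selection outside CV: {score_outside:.3f}")
print(f"CV R^2, selection inside CV:  {score_inside:.3f}")
```

Wrapping the preprocessing step in a pipeline is the standard way to keep it inside cross-validation; as the abstract notes, the sign and magnitude of the gap between the two estimates depend on the parameters of the problem.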

Original language: English
Pages (from-to): 1474-1502
Number of pages: 29
Journal: Journal of the Royal Statistical Society. Series B: Statistical Methodology
Volume: 84
Issue number: 4
State: Published - Sep 2022

Keywords

  • cross-validation
  • model selection
  • predictive modelling
  • preprocessing
