Bounded isotonic regression

Ronny Luss, Saharon Rosset

Research output: Contribution to journal › Article › peer-review

Abstract

Isotonic regression offers a flexible modeling approach under monotonicity assumptions, which are natural in many applications. Despite this attractive setting and extensive theoretical research, isotonic regression has enjoyed limited interest in practical modeling primarily due to its tendency to suffer significant overfitting, even in moderate dimension, as the monotonicity constraints do not offer sufficient complexity control. Here we propose to regularize isotonic regression by penalizing or constraining the range of the fitted model (i.e., the difference between the maximal and minimal predictions). We show that the optimal solution to this problem is obtained by constraining the non-penalized isotonic regression model to lie in the required range, and hence can be found easily given this non-penalized solution. This makes our approach applicable to large datasets and to generalized loss functions such as Huber's loss or exponential family log-likelihoods. We also show how the problem can be reformulated as a Lasso problem in a very high dimensional basis of upper sets. Hence, range regularization inherits some of the statistical properties of Lasso, notably its degrees of freedom estimation. We demonstrate the favorable empirical performance of our approach compared to various relevant alternatives.
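The abstract's key computational result states that the range-constrained solution is obtained by restricting the non-penalized isotonic fit to the required range. For squared-error loss this amounts to clipping the unconstrained isotonic regression to the interval bounds. The sketch below illustrates this shortcut with scikit-learn; the bounds `lo` and `hi` and all data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Illustrative data: noisy observations of a monotone trend.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = x + rng.normal(scale=0.3, size=x.size)

# Step 1: ordinary (non-penalized) isotonic regression.
iso = IsotonicRegression()
y_iso = iso.fit_transform(x, y)

# Step 2: constrain the fitted range by clipping to [lo, hi].
# Per the abstract, the range-constrained optimum is obtained by
# restricting the non-penalized fit to the required range; for
# squared loss that restriction is a simple clip.
lo, hi = 0.2, 0.8  # assumed range bounds, chosen for illustration
y_bounded = np.clip(y_iso, lo, hi)

# The clipped fit is still monotone and its range is at most hi - lo.
print(y_bounded.max() - y_bounded.min() <= hi - lo)
```

Because clipping preserves the ordering of the fitted values, the bounded fit remains monotone, so no re-solving of the isotonic program is needed once the unconstrained solution is available.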

Original language: English
Pages (from-to): 4488-4514
Number of pages: 27
Journal: Electronic Journal of Statistics
Volume: 11
Issue number: 2
DOIs
State: Published - 2017

Keywords

  • Lasso regularization
  • Multivariate isotonic regression
  • Nonparametric regression
  • Range regularization
  • Regularization path
