Sparsity and smoothness via the fused lasso

Robert Tibshirani*, Michael Saunders, Saharon Rosset, Ji Zhu, Keith Knight

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the 'fused lasso', a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the 'hinge' loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
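The penalty described in the abstract can be written down directly: it is the sum of an L1 term on the coefficients and an L1 term on their successive differences. The sketch below (a minimal illustration, not the authors' implementation; the names `lam1` and `lam2` stand in for the paper's two tuning parameters) shows the penalty and the corresponding penalized least-squares criterion.

```python
import numpy as np

def fused_lasso_penalty(beta, lam1, lam2):
    """Fused lasso penalty: lam1 * sum_j |beta_j| + lam2 * sum_j |beta_j - beta_{j-1}|.

    The first term encourages sparse coefficients; the second encourages
    a locally constant (piecewise-flat) coefficient profile.
    """
    beta = np.asarray(beta, dtype=float)
    return lam1 * np.sum(np.abs(beta)) + lam2 * np.sum(np.abs(np.diff(beta)))

def fused_lasso_objective(X, y, beta, lam1, lam2):
    """Penalized least-squares criterion that the fused lasso minimizes over beta."""
    resid = y - X @ beta
    return 0.5 * np.sum(resid ** 2) + fused_lasso_penalty(beta, lam1, lam2)
```

As a quick check of the intuition, with `lam1 = lam2 = 1` the locally constant profile `[0, 0, 2, 2, 0]` incurs penalty 8, while the equally sparse but oscillating profile `[0, 2, 0, 2, 0]` incurs penalty 12: the difference term rewards flat stretches.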

Original language: English
Pages (from-to): 91-108
Number of pages: 18
Journal: Journal of the Royal Statistical Society. Series B: Statistical Methodology
Issue number: 1
State: Published - 2005
Externally published: Yes


Keywords:

  • Fused lasso
  • Gene expression
  • Lasso
  • Least squares regression
  • Protein mass spectroscopy
  • Sparse solutions
  • Support vector classifier


