ℓ1 Regularization in infinite dimensional feature spaces

Saharon Rosset*, Grzegorz Swirszcz, Nathan Srebro, Ji Zhu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper we discuss the problem of fitting ℓ1 regularized prediction models in infinite (possibly non-countable) dimensional feature spaces. Our main contributions are: a. Deriving a generalization of ℓ1 regularization based on measures which can be applied in non-countable feature spaces; b. Proving that the sparsity property of ℓ1 regularization is maintained in infinite dimensions; c. Devising a path-following algorithm that can generate the set of regularized solutions in "nice" feature spaces; and d. Presenting an example of penalized spline models where this path-following algorithm is computationally feasible, and gives encouraging empirical results.
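
For orientation only, the following is a minimal sketch of the standard finite-dimensional ℓ1-regularized objective alongside one natural measure-based generalization to an uncountable feature space. The notation (loss L, penalty λ, feature index set Ω, features φ_ω, signed measure μ) is illustrative and need not match the paper's exact formulation.

% Finite-dimensional l1-regularized fitting with loss L and penalty lambda:
\[
\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p}
  \sum_{i=1}^{n} L\Big( y_i, \sum_{j=1}^{p} \beta_j x_{ij} \Big)
  + \lambda \, \|\beta\|_1
\]
% Sketch of a measure-based analogue over a (possibly uncountable) feature
% index set Omega: the coefficient vector becomes a finite signed measure mu,
% and the l1 norm is replaced by its total variation |mu|(Omega). When Omega
% is finite this reduces to the objective above.
\[
\hat{\mu} = \arg\min_{\mu}
  \sum_{i=1}^{n} L\Big( y_i, \int_{\Omega} \phi_{\omega}(x_i) \, d\mu(\omega) \Big)
  + \lambda \, |\mu|(\Omega)
\]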

Original language: English
Title of host publication: Learning Theory - 20th Annual Conference on Learning Theory, COLT 2007, Proceedings
Pages: 544-558
Number of pages: 15
State: Published - 2007
Externally published: Yes
Event: 20th Annual Conference on Learning Theory, COLT 2007 - San Diego, CA, United States
Duration: 13 Jun 2007 - 15 Jun 2007

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4539 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 20th Annual Conference on Learning Theory, COLT 2007
Country/Territory: United States
City: San Diego, CA
Period: 13/06/07 - 15/06/07
