Feature selection for unsupervised and supervised inference: The emergence of sparsity in a weighted-based approach

Lior Wolf, Amnon Shashua

Research output: Contribution to conference › Paper › peer-review

Abstract

The problem of selecting a subset of relevant features from a potentially overwhelming quantity of data is classic and arises in many branches of science; examples in computer vision, text processing, and more recently bioinformatics are abundant. In this work we present a definition of "relevancy" based on spectral properties of the affinity (or Laplacian) matrix of the features' measurement matrix. The feature selection process is then based on a continuous ranking of the features defined by a least-squares optimization process. A remarkable property of the feature relevance function is that sparse solutions for the ranking values naturally emerge as a result of a "biased non-negativity" of a key matrix in the process. As a result, a simple least-squares optimization process converges onto a sparse solution, i.e., a selection of a subset of features that forms a local maximum of the relevance function. The feature selection algorithm can be embedded in both unsupervised and supervised inference problems, and empirical evidence shows that the feature selections typically achieve high accuracy even when only a small fraction of the features are relevant.
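The idea of ranking features by the spectral structure of a weighted affinity matrix can be illustrated with a minimal sketch. This is not the paper's exact algorithm, only a simplified analogue under assumed conventions: rows of the measurement matrix `M` are features, the affinity of the weighted data is the sum of weighted feature outer products, and each feature's relevance is its projected energy onto the leading eigenspace.

```python
import numpy as np

def spectral_feature_ranking(M, k=2, n_iter=20):
    """Illustrative sketch (not the authors' exact method): iteratively
    re-weight features by how much each contributes to the leading
    spectral (cluster) structure of the weighted affinity matrix.

    M : (n_features, n_samples) measurement matrix, one row per feature.
    k : number of leading eigenvectors assumed to capture the structure.
    Returns a non-negative relevance weight per feature; in practice the
    weights tend to concentrate on the structure-carrying features.
    """
    n_features = M.shape[0]
    alpha = np.ones(n_features) / n_features  # uniform initial weights
    for _ in range(n_iter):
        # Affinity of the weighted data: A = sum_i alpha_i * m_i m_i^T
        A = (M.T * alpha) @ M                 # (n_samples, n_samples)
        # Leading k eigenvectors of the symmetric affinity matrix
        _, Q = np.linalg.eigh(A)
        Q = Q[:, -k:]
        # Relevance of feature i: energy of row m_i projected onto span(Q)
        scores = np.sum((M @ Q) ** 2, axis=1)
        alpha = scores / (np.linalg.norm(scores) + 1e-12)
    return alpha
```

On synthetic data where a few features carry a two-cluster structure and the rest are noise, the returned weights are dominated by the structure-carrying features, mirroring the sparsity behaviour described in the abstract.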

Original language: English
Pages: 378-384
Number of pages: 7
DOIs
State: Published - 2003
Externally published: Yes
Event: Proceedings: Ninth IEEE International Conference on Computer Vision - Nice, France
Duration: 13 Oct 2003 – 16 Oct 2003

Conference

Conference: Proceedings: Ninth IEEE International Conference on Computer Vision
Country/Territory: France
City: Nice
Period: 13/10/03 – 16/10/03

