Information-theoretic algorithm for feature selection

Mark Last, Abraham Kandel*, Oded Maimon

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Feature selection is used to improve the efficiency of learning algorithms by finding an optimal subset of features. However, most feature selection techniques can handle only certain types of data. Additional limitations of existing methods include intensive computational requirements and an inability to identify redundant variables. In this paper, we present a novel information-theoretic algorithm for feature selection, which finds an optimal set of attributes by removing both irrelevant and redundant features. The algorithm has polynomial computational complexity and is applicable to datasets of a mixed nature. The method's performance is evaluated on several benchmark datasets using a standard classifier (C4.5).
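To illustrate the general idea of removing both irrelevant and redundant features with information-theoretic criteria, the sketch below greedily selects discrete features by empirical mutual information with the target, penalizing redundancy with already-selected features. This is a generic sketch of the technique family, not a reconstruction of the paper's algorithm; the function names and the relevance-minus-redundancy score are illustrative assumptions.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits for two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts folded in
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

def select_features(features, target, k):
    """Greedy information-theoretic selection (generic sketch, not the
    paper's exact procedure): at each step, pick the feature with the
    highest relevance I(X;Y) minus its worst-case redundancy with the
    features already selected; stop when no feature adds information."""
    selected = []
    remaining = dict(features)
    while remaining and len(selected) < k:
        def score(name):
            relevance = mutual_information(remaining[name], target)
            redundancy = max(
                (mutual_information(remaining[name], features[s]) for s in selected),
                default=0.0,
            )
            return relevance - redundancy
        best = max(remaining, key=score)
        if score(best) <= 0:  # only irrelevant or redundant features remain
            break
        selected.append(best)
        del remaining[best]
    return selected
```

With a target `[0, 0, 1, 1]`, a copy of the target as feature `a`, an identical copy as feature `b`, and an independent feature `c`, the sketch keeps `a`, then discards `b` as redundant and `c` as irrelevant.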

Original language: English
Pages (from-to): 799-811
Number of pages: 13
Journal: Pattern Recognition Letters
Volume: 22
Issue number: 6-7
State: Published - May 2001

Funding

Funder: USF Center for Software Testing (funder number 2108-004-00)

Keywords

• Classification
• Feature selection
• Information theory
• Information-theoretic network
