Learning low-dimensional representations via the usage of multiple-class labels

Nathan Intrator*, Shimon Edelman

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Learning to recognize visual objects from examples requires the ability to find meaningful patterns in spaces of very high dimensionality. We present a method for dimensionality reduction that effectively biases the learning system by combining multiple constraints via the use of class labels. The use of multiple class labels steers the resulting low-dimensional representation to become invariant to those directions of variation in the input space that are irrelevant to classification; this is done merely by making class labels independent of these directions. We also show that prior knowledge of the proper dimensionality of the target representation can be imposed by training a multi-layer bottleneck network. Computational experiments involving non-trivial categorization of parameterized fractal images and of human faces indicate that the low-dimensional representation extracted by our method leads to improved generalization in the learned tasks and is likely to preserve the topology of the original space.
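The core idea can be illustrated with a minimal sketch: a network whose narrow bottleneck layer is shared by several classification heads, so that directions of input variation irrelevant to every label set are squeezed out of the learned representation. This is not the authors' exact architecture; it is a simplified, NumPy-only toy (synthetic 10-D data, a 2-unit tanh bottleneck, two hypothetical binary label sets) intended only to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 samples in a 10-D input space.
# Two independent class-label sets (multiple labels), each depending on
# a different direction of the input; the other 8 directions are noise.
X = rng.normal(size=(200, 10))
y1 = (X[:, 0] > 0).astype(float)  # label set 1
y2 = (X[:, 1] > 0).astype(float)  # label set 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bottleneck network: 10-D input -> 2-D tanh bottleneck -> two heads.
W1 = rng.normal(scale=0.5, size=(10, 2))  # input -> shared bottleneck
w_a = rng.normal(scale=0.5, size=2)       # head for label set 1
w_b = rng.normal(scale=0.5, size=2)       # head for label set 2
lr = 0.5

for _ in range(500):
    H = np.tanh(X @ W1)                   # bottleneck activations
    p1 = sigmoid(H @ w_a)
    p2 = sigmoid(H @ w_b)
    # Gradients of the summed (mean) cross-entropy losses of both heads
    d1 = (p1 - y1) / len(X)
    d2 = (p2 - y2) / len(X)
    dH = np.outer(d1, w_a) + np.outer(d2, w_b)
    w_a -= lr * (H.T @ d1)
    w_b -= lr * (H.T @ d2)
    W1 -= lr * (X.T @ (dH * (1 - H**2)))  # backprop through tanh

# The 2-D bottleneck activations are the learned low-dim representation:
# they must support both label sets, so label-irrelevant input
# directions receive no weight.
Z = np.tanh(X @ W1)
acc1 = np.mean((sigmoid(Z @ w_a) > 0.5) == y1)
acc2 = np.mean((sigmoid(Z @ w_b) > 0.5) == y2)
```

After training, `Z` is a 2-D embedding of the 10-D inputs in which both classification tasks are (approximately) linearly solvable, mirroring the paper's point that extra label sets act as constraints on the bottleneck representation.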

Original language: English
Pages (from-to): 259-281
Number of pages: 23
Journal: Network: Computation in Neural Systems
Volume: 8
Issue number: 3
DOIs
State: Published - Aug 1997
