A Single-Iteration Threshold Hamming Network

Research output: Contribution to journal › Article › peer-review

Abstract

We analyze in detail the performance of a Hamming network classifying inputs that are distorted versions of one of its m stored memory patterns, each being a binary vector of length n. It is shown that the activation function of the memory neurons in the original Hamming network may be replaced by a simple threshold function. By judiciously determining the threshold value, the “winner-take-all” subnet of the Hamming network (known to be the essential factor determining the time complexity of the network's computation) may be altogether discarded. For m growing exponentially in n, the resulting Threshold Hamming Network correctly classifies the input pattern in a single iteration, with probability approaching 1.
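For intuition, the following is a minimal sketch of the idea described above, not the authors' implementation: each memory neuron computes a matching score between the input and its stored pattern and fires if the score crosses a fixed threshold, so no winner-take-all iteration is needed. The threshold used here is an ad hoc illustrative choice; the paper's contribution is determining this value judiciously so that, with probability approaching 1, a single pass suffices.

```python
import numpy as np

def threshold_hamming_classify(memories, x, theta):
    """One-shot classification sketch: memory neuron i 'fires' iff its
    stored pattern agrees with the input x on at least theta bits.
    (theta is an assumed, illustrative value; the paper derives the
    threshold so that exactly one neuron fires with high probability.)"""
    # Matching score of each stored pattern: n minus its Hamming distance to x.
    scores = (memories == x).sum(axis=1)
    return np.flatnonzero(scores >= theta)  # indices of firing neurons

# Toy usage: m random binary memories of length n, one of them distorted.
rng = np.random.default_rng(0)
n, m, flips = 64, 8, 5
memories = rng.integers(0, 2, size=(m, n))
x = memories[3].copy()
x[rng.choice(n, size=flips, replace=False)] ^= 1   # flip 5 input bits
print(threshold_hamming_classify(memories, x, theta=int(0.75 * n)))  # -> [3]
```

With this ad hoc threshold, the correct memory (agreeing on n - 5 = 59 bits) fires while an unrelated random memory (agreeing on roughly n/2 bits) almost certainly does not, which is the single-iteration behavior the abstract describes.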

Original language: English
Pages (from-to): 261-266
Number of pages: 6
Journal: IEEE Transactions on Neural Networks
Volume: 6
Issue number: 1
State: Published - Jan 1995
