## Abstract

We analyze in detail the performance of a Hamming network classifying inputs that are distorted versions of one of its m stored memory patterns, each being a binary vector of length n. It is shown that the activation function of the memory neurons in the original Hamming network may be replaced by a simple threshold function. By judiciously determining the threshold value, the “winner-take-all” subnet of the Hamming network (known to be the essential factor determining the time complexity of the network's computation) may be altogether discarded. For m growing exponentially in n, the resulting Threshold Hamming Network correctly classifies the input pattern in a single iteration, with probability approaching 1.
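The idea can be illustrated with a minimal sketch: each memory neuron computes the Hamming distance between its stored pattern and the input, and fires iff that distance falls below a fixed threshold, so no winner-take-all competition is needed. The pattern sizes, noise level, and threshold choice below are illustrative assumptions, not the paper's exact construction or probabilistic analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 8                                 # pattern length, number of stored memories
patterns = rng.integers(0, 2, size=(m, n))   # m random binary memory patterns

def classify(x, threshold):
    """Return indices of memory neurons whose Hamming distance to x is
    below `threshold` (ideally exactly the correct neuron fires)."""
    dists = np.count_nonzero(patterns != x, axis=1)
    return np.flatnonzero(dists < threshold)

# Distort one stored pattern by flipping a few bits.
x = patterns[3].copy()
flip = rng.choice(n, size=5, replace=False)
x[flip] ^= 1

# A single thresholded pass, with no winner-take-all iteration; for a
# well-chosen threshold the distorted memory's neuron is the sole firer.
winners = classify(x, threshold=n // 4)
print(winners)
```

The threshold must sit between the expected distance to the true (distorted) memory and the expected distance to the other stored patterns (about n/2 for random binary patterns); the paper's contribution is showing such a threshold exists with probability approaching 1 even when m grows exponentially in n.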

Original language | English
---|---
Pages (from-to) | 261-266
Number of pages | 6
Journal | IEEE Transactions on Neural Networks
Volume | 6
Issue number | 1
State | Published - Jan 1995