History-dependent attractor neural networks

Isaac Meilijson*, Eytan Ruppin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

We present a methodological framework enabling a detailed description of the performance of Hopfield-like attractor neural networks (ANNs) during their first two iterations. Using a Bayesian approach, we find that performance is improved when a history-based term is included in the neurons' dynamics. A further enhancement of the network's performance is achieved by judiciously choosing the censored neurons (those which become active in a given iteration) on the basis of the magnitude of their post-synaptic potentials. The contribution of biologically plausible, censored, history-dependent dynamics is especially marked in conditions of low firing activity and sparse connectivity, two important characteristics of the mammalian cortex. In such networks, the performance attained is higher than the performance of two 'independent' iterations, which represents an upper bound on the performance of history-independent networks.
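The paper derives its history weighting from Bayesian principles; that derivation is not reproduced here. As a rough illustration of what censored, history-dependent dynamics look like, the following NumPy sketch mixes the current post-synaptic potential with the previous iteration's potential through an assumed linear coefficient `lam`, and lets only the fraction `frac` of neurons with the largest combined potentials become active. The function name, the parameter values, and the linear form of the history term are illustrative assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 500, 10                       # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Standard Hebbian storage with zero self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def censored_history_step(state, prev_field, lam=0.5, frac=0.2):
    """One censored, history-dependent update (illustrative only).

    prev_field -- post-synaptic potentials from the previous iteration,
                  serving as the history term
    lam        -- assumed linear mixing coefficient, a stand-in for the
                  paper's Bayesian weighting
    frac       -- fraction of neurons allowed to become active (the
                  'censored' set), picked by potential magnitude
    """
    field = W @ state
    combined = field + lam * prev_field
    k = int(frac * len(state))
    # Censoring: only the k neurons with the largest |combined potential| fire.
    active = np.argsort(-np.abs(combined))[:k]
    new_state = state.copy()
    new_state[active] = np.where(combined[active] >= 0, 1, -1)
    return new_state, field

# Cue with a noisy version of pattern 0 and run the two iterations
# analysed in the paper.
state = patterns[0] * rng.choice([1, -1], size=N, p=[0.8, 0.2])
prev_field = np.zeros(N)
for _ in range(2):
    state, prev_field = censored_history_step(state, prev_field)

print("overlap with stored pattern:", state @ patterns[0] / N)
```

With `frac = 1` and `lam = 0` the step reduces to a conventional synchronous Hopfield update, which makes the two added ingredients (censoring and the history term) easy to isolate in experiments.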

Original language: English
Pages (from-to): 195-221
Number of pages: 27
Journal: Network: Computation in Neural Systems
Volume: 4
Issue number: 2
State: Published - 1993
