Optimal signalling in attractor neural networks

Isaac Meilijson*, Eytan Ruppin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In a recent paper we presented a methodological framework describing the two-iteration performance of Hopfield-like attractor neural networks with history-dependent Bayesian dynamics. We now extend this analysis in several directions: input patterns applied to small subsets of neurons, general connectivity architectures, and more efficient use of history. We show that the optimal signal (activation) function has a slanted sigmoidal shape, and provide an intuitive account of activation functions with a non-monotone shape. This function endows the analytical model with some properties characteristic of cortical neurons' firing.
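The "slanted sigmoidal" shape mentioned in the abstract can be illustrated by a sigmoid with a linear slant term. The sketch below is a hypothetical parametrisation for illustration only, not the paper's derived optimal function; the form `tanh(beta*x) - lam*x` and the parameter values are assumptions chosen to show how such a function can become non-monotone.

```python
import math

def slanted_sigmoid(x, beta=2.0, lam=0.3):
    """Illustrative 'slanted sigmoid' signal function (hypothetical form):
    a sigmoid minus a linear slant.  For lam > 0 it rises steeply near
    x = 0 but, once the sigmoid saturates, the linear term dominates and
    the function decreases -- i.e. it is non-monotone."""
    return math.tanh(beta * x) - lam * x

# Sample the function on [0, 5] to exhibit the non-monotone shape:
values = [slanted_sigmoid(x / 10) for x in range(0, 51)]
rises = values[1] > values[0]    # increasing near the origin
falls = values[-1] < values[-2]  # decreasing after saturation
```

With these parameters the function peaks near the point where the sigmoid's slope drops below the slant `lam`, after which it declines, giving the non-monotone profile the abstract alludes to.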

Original language: English
Pages (from-to): 277-298
Number of pages: 22
Journal: Network: Computation in Neural Systems
Volume: 5
Issue number: 2
State: Published - 1994
