Abstract
In a recent paper we presented a methodological framework describing the two-iteration performance of Hopfield-like attractor neural networks with history-dependent Bayesian dynamics. We now extend this analysis in a number of directions: input patterns applied to small subsets of neurons, general connectivity architectures, and more efficient use of history. We show that the optimal signal (activation) function has a slanted sigmoidal shape, and provide an intuitive account of activation functions with a non-monotone shape. This function endows the analytical model with some properties characteristic of the firing of cortical neurons.
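The activation-function shapes described above can be sketched qualitatively. The functional forms and parameter values below are illustrative assumptions for intuition only, not the functions actually derived in the paper:

```python
import math

def slanted_sigmoid(h, slope=1.0, tilt=0.1):
    """Illustrative 'slanted' sigmoid: a logistic curve plus a small linear
    term, so the response keeps a gentle tilt instead of saturating flat.
    (Hypothetical form; not the paper's derived optimal signal function.)"""
    return 1.0 / (1.0 + math.exp(-slope * h)) + tilt * h

def non_monotone_signal(h, slope=4.0, decay=0.5):
    """Illustrative non-monotone activation: grows with the local field h,
    then decays again for large h, giving the rise-and-fall shape that the
    abstract attributes to non-monotone activation functions."""
    return math.tanh(slope * h) * math.exp(-decay * h * h)

# The non-monotone function peaks at an intermediate field value and
# falls off for large h, unlike an ordinary saturating sigmoid.
values = [non_monotone_signal(h / 10.0) for h in range(0, 40)]
peak_index = max(range(len(values)), key=values.__getitem__)
assert 0 < peak_index < len(values) - 1  # peak is interior, not at the ends
```

Here `h` plays the role of a neuron's local field; the `tilt` and `decay` parameters are purely hypothetical knobs used to produce the slanted and non-monotone shapes, respectively.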
Original language | English
---|---
Pages (from-to) | 277-298
Number of pages | 22
Journal | Network: Computation in Neural Systems
Volume | 5
Issue number | 2
DOIs |
State | Published - 1994