Design architectures and training of neural networks with a distributed genetic algorithm

S. Oliker*, M. Furst, O. Maimon

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

We present a method for designing and training neural networks using a distributed genetic algorithm reinforced by the perceptron learning rule. The method determines the network's architecture and weights for a given task, where the network is composed of binary, linear threshold units. For the genetic algorithm we define an objective (fitness) function that considers, for each unit, primarily the overall network error and, secondarily, the unit's candidate connections and weights that favor continued convergence. In parallel, to accelerate the learning process, we apply the perceptron learning rule to search for a better set of input connection weights for each unit. Examples are given that show the potential of the proposed approach.
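The following minimal Python sketch (not from the paper) illustrates the two ingredients the abstract combines: a binary linear threshold unit refined with the perceptron learning rule, and a GA-style fitness that weights overall error first and a connectivity cost second. The function names, the connection-penalty term, and the toy AND task are assumptions made for illustration.

```python
import random

def threshold_unit(weights, bias, inputs):
    """Binary linear threshold unit: output 1 if the weighted sum exceeds 0."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def perceptron_step(weights, bias, inputs, target, lr=0.1):
    """One perceptron-rule update toward a better input-weight set."""
    error = target - threshold_unit(weights, bias, inputs)
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return new_weights, bias + lr * error

def fitness(weights, bias, samples, connection_penalty=0.01):
    """GA objective: primarily the task error, secondarily a connectivity cost
    (number of nonzero input connections) -- an assumed stand-in for the
    paper's secondary term."""
    errors = sum(abs(t - threshold_unit(weights, bias, x)) for x, t in samples)
    n_connections = sum(1 for w in weights if w != 0.0)
    return -(errors + connection_penalty * n_connections)

if __name__ == "__main__":
    # Toy task: logical AND of two binary inputs.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = [random.uniform(-1, 1) for _ in range(2)], 0.0
    for _ in range(50):                      # perceptron refinement passes
        for x, t in samples:
            weights, bias = perceptron_step(weights, bias, x, t)
    print("fitness:", fitness(weights, bias, samples))
    print("outputs:", [threshold_unit(weights, bias, x) for x, _ in samples])
```

In the paper's scheme the genetic algorithm would evolve unit connectivity and weights against such a fitness, with the perceptron rule acting as a local refinement step; the sketch above shows only the per-unit building blocks.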

Original language: English
Title of host publication: 1993 IEEE International Conference on Neural Networks
Editors: Anon
Publisher: Publ by IEEE
Pages: 199-202
Number of pages: 4
ISBN (Print): 0780312007
State: Published - 1993
Event: 1993 IEEE International Conference on Neural Networks - San Francisco, CA, USA
Duration: 28 Mar 1993 - 1 Apr 1993

Publication series

Name: 1993 IEEE International Conference on Neural Networks

Conference

Conference: 1993 IEEE International Conference on Neural Networks
City: San Francisco, CA, USA
Period: 28/03/93 - 01/04/93
