
Adequate input for learning in attractor neural networks
In the context of learning in attractor neural networks (ANNs), we discuss the constraints imposed by the requirement that the afferents arriving at the neurons of the attractor network from the stimulus compete successfully with the afferents generated by the recurrent activity inside the network, in a situation in which both sets of synaptic efficacies are weak and approximately equal. We simulate and analyse a two-component network: one component representing the stimulus, the other an ANN. We show that if the stimuli are correlated with the receptive fields of neurons in the ANN, and are of sufficient contrast, the stimulus can provide the information the recurrent network needs to learn new stimuli, even in the very disfavoured situation of synaptic predominance in the recurrent part. Stimuli that are insufficiently correlated with the receptive fields, or are of insufficient contrast, are submerged by the recurrent activity. © 1993 Informa UK Ltd. All rights reserved: reproduction in whole or part not permitted.
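The abstract describes a two-component architecture in which weak stimulus afferents must compete with comparably weak recurrent afferents inside the attractor network. Below is a minimal, hypothetical sketch of such a setup in Python/NumPy. The sizes, efficacies, contrast scaling, and the way "correlation with the receptive fields" is modelled (a noisy topographic mapping) are illustrative assumptions, not the authors' model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and parameters (assumptions, not taken from the paper)
N_STIM, N_ANN = 100, 100       # stimulus layer and attractor-network sizes
J_FF = 0.1                     # mean feedforward efficacy (weak)
J_REC = 0.1                    # mean recurrent efficacy (comparably weak)
CONTRAST = 2.0                 # scales how strongly the stimulus drives its afferents

# Feedforward weights "correlated" with the ANN receptive fields:
# a simple topographic mapping plus noise stands in for that correlation.
W_ff = J_FF * (np.eye(N_ANN, N_STIM) + 0.1 * rng.standard_normal((N_ANN, N_STIM)))

# Recurrent weights storing one previously learned pattern (Hebbian outer product).
stored = rng.choice([-1.0, 1.0], size=N_ANN)
W_rec = (J_REC / N_ANN) * np.outer(stored, stored)
np.fill_diagonal(W_rec, 0.0)

# A new stimulus whose strength is set by the contrast parameter.
stimulus = CONTRAST * rng.choice([-1.0, 1.0], size=N_STIM)

# Simple rate dynamics: relax the ANN activity under both sources of input.
r = np.zeros(N_ANN)
for _ in range(50):
    ff_input = W_ff @ stimulus          # afferent drive from the stimulus layer
    rec_input = W_rec @ r               # drive generated by recurrent activity
    r = np.tanh(ff_input + rec_input)   # pointwise saturating transfer function

# Compare the two contributions: if the feedforward drive dominates, the new
# stimulus can imprint itself; otherwise the recurrent attractor submerges it.
print("mean |feedforward drive|:", np.abs(W_ff @ stimulus).mean())
print("mean |recurrent drive|  :", np.abs(W_rec @ r).mean())
print("overlap with stored pattern:", float(r @ stored) / N_ANN)
```

Sweeping CONTRAST (or the noise level in W_ff) in a sketch like this illustrates the trade-off the abstract points to: sufficiently correlated, high-contrast stimuli steer the network state, while weak or poorly matched stimuli leave it in the previously stored attractor.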
Related Subject Headings
- Neurology & Neurosurgery
- 4601 Applied computing
- 3209 Neurosciences
- 3202 Clinical sciences
- 17 Psychology and Cognitive Sciences
- 11 Medical and Health Sciences
- 08 Information and Computing Sciences