Training memristor-based multilayer neuromorphic networks with SGD, momentum and adaptive learning rates.
Neural networks implemented on traditional hardware face an inherent memory-latency limitation: processing units such as GPUs, FPGAs, and custom ASICs must wait for inputs to be read from memory and for outputs to be written back. This motivates memristor-based neuromorphic computing, in which the memory units (i.e., memristors) themselves have computing capability. However, training a memristor-based neural network is difficult because memristors behave differently from CMOS hardware. This paper proposes a new training approach that enables prevailing neural network training techniques to be applied to memristor-based neuromorphic networks. In particular, we introduce momentum and an adaptive learning rate to the circuit training, both of which are proven methods that significantly accelerate the convergence of neural network parameters. Furthermore, we show that the circuit can be used for neural networks with arbitrary numbers of layers, neurons, and parameters. Simulation results on four classification tasks demonstrate that the proposed circuit achieves both high accuracy and fast training. Compared with an SGD-based training circuit on the WBC data set, our circuit trains 37.2% faster while losing only 0.77% accuracy. On the MNIST data set, the new circuit even improves accuracy.
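The abstract does not spell out the update rules realized on-circuit; as background for the techniques it names, here is a minimal software sketch of SGD with momentum and a decaying (adaptive) learning rate. The function name, hyperparameter values, and the particular decay schedule are illustrative assumptions, not details from the paper, which implements the update in analog circuitry.

```python
def momentum_adaptive_step(w, grad, v, t, lr0=0.1, beta=0.9, decay=1e-3):
    """One SGD update with momentum and a time-decayed learning rate.

    All hyperparameter names and values are illustrative assumptions,
    not taken from the paper.
    """
    lr = lr0 / (1.0 + decay * t)   # adaptive step size: shrinks over iterations
    v = beta * v + lr * grad       # momentum: accumulate a decaying gradient history
    w = w - v                      # move the weight along the velocity
    return w, v

# Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for t in range(200):
    w, v = momentum_adaptive_step(w, 2.0 * (w - 3.0), v, t)
print(round(w, 3))  # converges toward 3.0
```

Momentum smooths the descent direction by averaging past gradients, while the shrinking step size allows large early moves and fine late adjustments; together these are the standard reasons such updates converge faster than plain SGD.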
Related Subject Headings
- Neurons
- Neural Networks, Computer
- Motion
- Learning
- Humans
- Artificial Intelligence & Image Processing
- 4905 Statistics
- 4611 Machine learning
- 4602 Artificial intelligence