Training memristor-based multilayer neuromorphic networks with SGD, momentum and adaptive learning rates.
Zheng Yan, Jiadong Chen, Rui Hu, Tingwen Huang, Yiran Chen, Shiping Wen
Published in: Neural Networks (2020)
Keyphrases
- learning rate
- adaptive learning rate
- stochastic gradient descent
- backpropagation algorithm
- weight vector
- convergence rate
- learning algorithm
- quantum mechanics
- hidden layer
- feedforward neural networks
- convergence speed
- Gaussian kernels
- training examples
- uniform convergence
- activation function
- step size
- semi-supervised
- particle swarm optimization algorithm
- machine learning algorithms
- training samples
- upper bound
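As the title and keyphrases indicate, the paper trains multilayer networks with stochastic gradient descent combined with momentum and an adaptive learning rate. The following is a minimal, generic sketch of such an update rule (momentum plus per-parameter RMSProp-style scaling); it is illustrative only and is not the paper's memristor-specific training scheme, and all function names and hyperparameters are assumptions.

```python
# Illustrative sketch: SGD with momentum and a per-parameter adaptive
# learning rate. Not the paper's memristor-aware update rule.
import numpy as np

def sgd_momentum_adaptive(grad_fn, w, lr=0.01, beta=0.9, rho=0.999,
                          eps=1e-8, steps=1000):
    """Iteratively update weights w using grad_fn(w) -> loss gradient."""
    v = np.zeros_like(w)   # momentum buffer (running average of gradients)
    s = np.zeros_like(w)   # running average of squared gradients
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v + (1.0 - beta) * g       # momentum term
        s = rho * s + (1.0 - rho) * g * g     # adaptive scaling statistic
        w = w - lr * v / (np.sqrt(s) + eps)   # scaled gradient step
    return w

# Toy usage: minimize the quadratic loss 0.5 * ||w - 3||^2,
# whose gradient is simply w - 3.
if __name__ == "__main__":
    w0 = np.zeros(4)
    w_opt = sgd_momentum_adaptive(lambda w: w - 3.0, w0)
    print(w_opt)  # approaches [3, 3, 3, 3]
```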