Improving the learning rate of back-propagation with the gradient reuse algorithm.
Don R. Hush, John M. Salas
Published in: ICNN (1988)
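The title suggests reusing a back-propagated gradient for several weight updates (rather than recomputing it each step) as a way to speed up convergence. The sketch below is a minimal, hypothetical illustration of that idea under assumed details; the model, loss, reuse limit (`max_reuse`), and stopping rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def mse(w, X, y):
    # Mean-squared error of a single sigmoid unit (illustrative model).
    pred = 1.0 / (1.0 + np.exp(-X @ w))
    return np.mean((pred - y) ** 2)

def grad(w, X, y):
    # Gradient of the MSE above via the chain rule
    # (ordinary back-propagation for this one-layer case).
    pred = 1.0 / (1.0 + np.exp(-X @ w))
    return 2.0 * X.T @ ((pred - y) * pred * (1.0 - pred)) / len(y)

def train_gradient_reuse(X, y, lr=0.5, epochs=100, max_reuse=5):
    # Assumed reading of "gradient reuse": compute one gradient, then keep
    # applying it as long as the error continues to decrease.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        g = grad(w, X, y)              # one back-propagation pass
        err = mse(w, X, y)
        for _ in range(max_reuse):     # reuse the same gradient ...
            w_new = w - lr * g
            new_err = mse(w_new, X, y)
            if new_err >= err:         # ... only while the error keeps falling
                break
            w, err = w_new, new_err
    return w

if __name__ == "__main__":
    # Tiny synthetic problem: learn a noisy linear threshold.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
    w = train_gradient_reuse(X, y)
    print("final training MSE:", mse(w, X, y))
```

The effect of reusing a gradient in this way is similar to temporarily enlarging the step size along a descent direction that is still productive, which is consistent with the learning-rate and step-size keyphrases listed below.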
Keyphrases
- bp algorithm
- back propagation
- learning rate
- training algorithm
- learning algorithm
- convergence rate
- neural network
- levenberg marquardt
- error function
- feed forward neural networks
- weight update
- hidden layer
- adaptive learning rate
- steepest descent method
- optimization algorithm
- artificial neural networks
- activation function
- particle swarm optimization
- gradient vector
- rapid convergence
- feedforward neural networks
- backpropagation algorithm
- radial basis function
- cost function
- delta bar delta
- evolutionary algorithm
- fuzzy neural network
- artificial intelligence
- feature selection
- improved algorithm
- step size
- multilayer perceptron
- fuzzy logic