Optimal learning rates for each pattern and neuron in gradient descent training of multilayer perceptrons.
Sang-Hoon Oh, Soo-Young Lee. Published in: IJCNN (1999)
Keyphrases
- backpropagation algorithm
- multilayer perceptron
- learning rate
- activation function
- error function
- hidden layer
- neural network
- feedforward neural networks
- number of hidden neurons
- back propagation
- feed forward neural networks
- training algorithm
- training speed
- bp algorithm
- artificial neural networks
- radial basis function
- convergence rate
- learning algorithm
- radial basis function network
- single hidden layer
- hidden neurons
- gaussian kernels
- multi layer perceptron
- objective function
- extreme learning machine
- optimal solution
- feature extraction
- uniform convergence
- genetic algorithm
- multilayer neural network
- dynamic programming
- decision trees
- training process
- pattern recognition
- rbf neural network
- convergence speed
- recurrent neural networks
- feed forward
- worst case
- fuzzy logic
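The paper's topic, per-neuron learning rates in gradient-descent training of a multilayer perceptron, can be illustrated with a minimal sketch. Note this is an assumption-laden demonstration, not the optimal rates derived by Oh and Lee: it trains a small 2-2-1 sigmoid MLP on XOR and scales a base rate per neuron by its accumulated squared error signal (an Adagrad-style rule chosen purely for illustration).

```python
import math
import random

# Illustrative sketch only: per-neuron adaptive learning rates in plain
# gradient-descent MLP training. The adaptation rule below (base rate divided
# by the root of each neuron's accumulated squared delta) is a stand-in
# assumption, NOT the optimal per-pattern/per-neuron rates from the paper.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # XOR inputs
T = [0, 1, 1, 0]                       # XOR targets

H, eta, eps = 2, 0.5, 1e-8
# Hidden layer: H neurons, each with 2 input weights + bias.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
# Output neuron: H weights + bias.
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]
G1 = [0.0] * H   # accumulated squared delta, one per hidden neuron
G2 = 0.0         # accumulated squared delta for the output neuron

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(sum(W2[j] * h[j] for j in range(H)) + W2[H])
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T)) / len(X)

before = mse()
for epoch in range(5000):
    for x, t in zip(X, T):
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                      # output delta
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
        # Per-neuron rate for the output neuron.
        G2 += dy * dy
        lr_out = eta / math.sqrt(G2 + eps)
        for j in range(H):
            W2[j] -= lr_out * dy * h[j]
        W2[H] -= lr_out * dy
        # Separate per-neuron rate for each hidden neuron.
        for j in range(H):
            G1[j] += dh[j] * dh[j]
            lr_j = eta / math.sqrt(G1[j] + eps)
            W1[j][0] -= lr_j * dh[j] * x[0]
            W1[j][1] -= lr_j * dh[j] * x[1]
            W1[j][2] -= lr_j * dh[j]
after = mse()
print(before, after)
```

Running the sketch prints the mean squared error before and after training; the per-neuron rates shrink for neurons whose error signals have historically been large, which is the general motivation behind assigning each neuron its own learning rate rather than one global value.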