Impact of ratio k on two-layer neural networks with dynamic optimal learning rate.
Tong Zhang, C. L. Philip Chen, Jin Zhou. Published in: IJCNN (2014)
Keyphrases
- learning rate
- neural network
- hidden layer
- multi-layer
- activation function
- training algorithm
- error function
- convergence rate
- learning algorithm
- convergence speed
- adaptive learning rate
- rapid convergence
- feed-forward neural networks
- artificial neural networks
- middle layer
- output layer
- weight vector
- fuzzy neural network
- genetic algorithm
- back-propagation
- training process
- neural network model
- convergence theorem
- optimal solution
- network architecture
- recurrent neural networks
- multilayer perceptron
- genetic algorithm (GA)
- fuzzy logic
- training set
- multilayer neural networks