Gradient-only surrogate to resolve learning rates for robust and consistent training of deep neural networks.
Younghwan Chae, Daniel N. Wilke, Dominic Kafka
Published in: Appl. Intell. (2023)
Keyphrases
- learning rate
- backpropagation algorithm
- neural network
- training algorithm
- back propagation
- feed forward neural networks
- hidden layer
- activation function
- feedforward neural networks
- error function
- multilayer perceptron
- training process
- artificial neural networks
- convergence rate
- learning algorithm
- adaptive learning rate
- multi layer perceptron
- bp algorithm
- genetic algorithm
- training set
- training examples
- gaussian kernels
- covering numbers
- learning theory
- uniform convergence
- recurrent neural networks
- training samples
- feature vectors
- feature selection
- machine learning
- data mining