A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek. Published in: CoRR (2021)
Keyphrases
- artificial neural networks
- feed forward neural networks
- neural network
- multilayer perceptron
- evolutionary artificial neural networks
- computational intelligence
- training algorithm
- backpropagation
- feedforward artificial neural networks
- training set
- update rule
- cost function
- convergence speed
- benchmark classification problems
- feed forward
- convergence rate
- loss function
- neural network model
- soft computing
- learning rate
- training process
- objective function
- hidden neurons
- training samples
- evolutionary algorithm
- conjugate gradient
- using artificial neural networks
- machine learning
- genetic algorithm
- convergence analysis
- stochastic gradient descent
- error function
- feedforward neural networks
- supervised learning
- learning rules
- training phase
- training examples
- hidden layer
- radial basis function
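The setting suggested by the title, gradient descent training of a neural network toward a constant target function, can be illustrated numerically. The sketch below is not the paper's proof or construction; the architecture (one hidden ReLU layer), width, learning rate, and data are all illustrative choices. It runs plain full-batch gradient descent on the mean-squared error against a constant target, the regime in which convergence of the risk is studied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): fit the constant target c
# on inputs drawn from [0, 1] with a one-hidden-layer ReLU network.
c = 2.0                                   # constant target value
x = rng.uniform(0.0, 1.0, size=(64, 1))   # training inputs
y = np.full((64, 1), c)                   # constant targets

W1 = rng.normal(size=(1, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.05                                 # constant step size
for step in range(2000):
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    out = h @ W2 + b2                     # network output
    err = out - y
    loss = float(np.mean(err ** 2))       # empirical risk (MSE)

    # Backpropagation of the mean-squared error.
    g_out = 2.0 * err / len(x)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0, keepdims=True)
    g_h = (g_out @ W2.T) * (h > 0.0)
    gW1 = x.T @ g_h
    gb1 = g_h.sum(axis=0, keepdims=True)

    # Plain (full-batch) gradient descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(loss)  # the risk decreases toward zero over training
```

For a constant target the output bias alone can already realize the target exactly, which is one reason this setting is tractable enough for a full convergence analysis.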