A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.
Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek. Published in: J. Complex. (2022)
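For orientation, here is a minimal sketch of the kind of setting the paper analyzes: full-batch gradient descent training of a shallow ReLU network to fit a constant target function. The network width, learning rate, step count, and data grid below are illustrative assumptions, not parameters taken from the paper, and the code is not the authors' construction.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): train a shallow
# ReLU network f(x) = sum_k v_k * max(w_k * x + b_k, 0) by full-batch
# gradient descent on the mean-squared risk against the constant
# target function f*(x) = c.

rng = np.random.default_rng(0)
width, c, lr, steps = 16, 2.0, 1e-2, 5000   # assumed hyperparameters
x = np.linspace(0.0, 1.0, 64)               # training inputs on [0, 1]
y = np.full_like(x, c)                      # constant target values

w = rng.normal(size=width)                  # hidden-layer weights
b = rng.normal(size=width)                  # hidden-layer biases
v = rng.normal(size=width) / width          # output-layer weights

for _ in range(steps):
    pre = np.outer(x, w) + b                # (n, width) pre-activations
    act = np.maximum(pre, 0.0)              # ReLU activations
    err = act @ v - y                       # residual f(x) - c
    # Gradients of the risk 0.5 * mean(err**2) w.r.t. v, w, b.
    grad_v = act.T @ err / x.size
    mask = (pre > 0).astype(float)          # ReLU derivative (a.e.)
    grad_w = ((err[:, None] * mask * v) * x[:, None]).sum(0) / x.size
    grad_b = (err[:, None] * mask * v).sum(0) / x.size
    v -= lr * grad_v
    w -= lr * grad_w
    b -= lr * grad_b

final = np.maximum(np.outer(x, w) + b, 0.0) @ v
print("final risk:", 0.5 * np.mean((final - y) ** 2))
```

Under these assumptions the risk typically decays toward zero as the iterates converge, which is the behavior the paper establishes rigorously for constant targets.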
Keyphrases
- artificial neural networks
- feed-forward neural networks
- neural network
- backpropagation
- multi-layer perceptron
- cost function
- update rule
- training algorithm
- benchmark classification problems
- evolutionary artificial neural networks
- feedforward neural networks
- computational intelligence
- training examples
- theorem proving
- feed-forward
- neural network model
- training set
- multilayer perceptron
- convergence speed
- genetic algorithm
- training process
- feedforward artificial neural networks
- functional language
- hidden neurons
- stochastic gradient descent
- training speed
- machine learning
- operator splitting
- proof planning
- decision trees
- faster convergence
- support vector
- learning rules
- active learning
- moving target
- radial basis function
- multi-layer
- theorem prover