Explicit loss asymptotics in the gradient descent training of neural networks.
Maksim Velikanov, Dmitry Yarotsky. Published in: NeurIPS (2021)
Keyphrases
- neural network
- neural network training
- training algorithm
- training process
- feedforward neural networks
- multilayer perceptron
- backpropagation
- error function
- pattern recognition
- conjugate gradient
- training set
- training data
- sufficient conditions
- Markov chain
- self-organizing maps
- radial basis function network
- learning rules
- semi-supervised learning
- fuzzy logic
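The paper's topic, the asymptotic decay of the training loss under gradient descent, can be illustrated with a minimal sketch. In the linearized (kernel) regime, the loss decomposes over eigenmodes of the kernel, and power-law spectra produce power-law loss decay. The spectrum `lam`, coefficients `c`, and the exponent values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative sketch (not the paper's exact setting): in a linearized model,
# gradient descent with step size eta shrinks each eigenmode independently, so
#   L(t) = 0.5 * sum_k c_k^2 * (1 - eta * lam_k)^(2t).
# A power-law spectrum lam_k ~ k^-nu then yields power-law decay L(t) ~ t^-xi.

def gd_loss(t, lam, c, eta):
    # Exact per-mode loss after t full-batch gradient descent steps.
    return 0.5 * np.sum(c**2 * (1.0 - eta * lam) ** (2 * t))

k = np.arange(1, 10001)
lam = k ** -2.0            # assumed power-law kernel eigenvalues
c = k ** -1.0              # assumed target coefficients in the eigenbasis
eta = 0.5 / lam.max()      # step size inside the stability bound eta < 2/lam_max

ts = np.array([10, 100, 1000])
losses = np.array([gd_loss(t, lam, c, eta) for t in ts])

# Fit the decay exponent xi from L(t) ~ C * t^-xi on a log-log scale.
xi = -np.polyfit(np.log(ts), np.log(losses), 1)[0]
print(losses, xi)
```

With these assumed exponents the fitted `xi` comes out near 0.5, consistent with the classical spectral prediction for this toy setup; the paper derives such exponents explicitly for actual neural network training.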