Universal scaling laws in the gradient descent training of neural networks
Maksim Velikanov, Dmitry Yarotsky
Published in: CoRR (2021)
Keyphrases
- neural network
- training process
- training algorithm
- multi-layer perceptron
- feedforward neural networks
- training patterns
- backpropagation
- artificial neural networks
- cost function
- training phase
- conjugate gradient
- backpropagation algorithm
- learning rules
- pattern recognition
- error backpropagation
- neural network training
- fault diagnosis
- loss function
- training set
- radial basis function network
- recurrent networks
- genetic algorithm
- training samples
- fuzzy logic
- objective function
- artificial intelligence
- activation function
- neural network model
- error function
- stochastic gradient descent
- active learning
- reinforcement learning
- image processing
- computer vision
- neural network structure
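The paper's subject, as the title and keyphrases indicate, is the shape of loss curves produced by gradient descent training. A minimal sketch of such a training loop is given below; the model (least-squares linear regression), data, learning rate, and step count are all illustrative choices and are not taken from the paper:

```python
# Minimal gradient-descent training loop on a mean-squared-error loss.
# Scaling-law analyses study the asymptotic decay of the `losses` curve
# recorded by loops like this one; the concrete setup here is illustrative.
def train(xs, ys, lr=0.1, steps=200):
    w, b = 0.0, 0.0          # parameters of the linear model y = w*x + b
    losses = []
    n = len(xs)
    for _ in range(steps):
        # Gradients of the MSE loss with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Plain (full-batch) gradient descent update
        w -= lr * grad_w
        b -= lr * grad_b
        # Record the loss after the update to obtain the training curve
        losses.append(sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / n)
    return w, b, losses

# Toy data lying exactly on y = 2x + 1, so the loss can decay toward zero
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b, losses = train(xs, ys)
```

On this toy problem the loss decays geometrically; the paper concerns the distinct power-law decay regimes that arise for large neural networks.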