Towards moderate overparameterization: global convergence guarantees for training shallow neural networks.
Samet Oymak, Mahdi Soltanolkotabi. Published in: CoRR (2019)
Keyphrases
- global convergence
- neural network
- training process
- convergence speed
- convergence rate
- training algorithm
- global optimum
- convergence analysis
- feedforward neural networks
- optimization methods
- convex minimization
- constrained optimization problems
- neural network training
- artificial neural networks
- feed forward neural networks
- multi-layer perceptron
- fuzzy logic
- genetic algorithm
- coordinate ascent
- globally convergent
- training set
- hidden layer
- backpropagation
- scheduling problem
- artificial intelligence
- line search
- supervised learning
- question answering
- recurrent neural networks