Polynomial Convergence of Gradient Descent for Training One-Hidden-Layer Neural Networks
Santosh S. Vempala, John Wilmes
Published in: CoRR (2018)
Keyphrases
- hidden layer
- neural network
- feedforward neural networks
- backpropagation algorithm
- back propagation
- training algorithm
- single hidden layer
- activation function
- number of hidden layers
- error back propagation
- multilayer perceptron
- error function
- feed forward
- learning rate
- neural network structure
- artificial neural networks
- number of hidden neurons
- neural nets
- radial basis function
- output layer
- convergence rate
- recurrent neural networks
- number of hidden units
- learning rules
- connection weights
- rbf neural network
- extreme learning machine
- hidden neurons
- single layer
- hidden units
- fuzzy neural network
- convergence speed
- hidden nodes
- objective function
- training process
- neural network model
- support vector
- training phase
- multi layer
- bp neural network
- artificial intelligence
- genetic algorithm
- expert systems
- machine learning
- training speed
- radial basis function neural network
- small number
- fuzzy logic
- latent variables
- basis functions