Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds
Santosh S. Vempala, John Wilmes
Published in: COLT (2019)
Keyphrases
- hidden layer
- neural network
- lower bound
- back propagation
- statistical queries
- activation function
- objective function
- multilayer perceptron
- feed forward
- upper bound
- error function
- feedforward neural networks
- artificial neural networks
- learning rate
- recurrent neural networks
- neural nets
- radial basis function
- rbf neural network
- training algorithm
- convergence rate
- backpropagation algorithm
- neural network structure
- connection weights
- learning rules
- hidden neurons
- output layer
- number of hidden layers
- fuzzy neural network
- optimal solution
- pattern recognition
- single layer
- convergence speed
- hidden units
- hidden nodes
- fuzzy logic
- vc dimension
- neural network model
- bp neural network
- fault diagnosis
- latent variables
- concept class
- multi layer
- radial basis function neural network
- rbf network
- network architecture
- image segmentation
- feature selection
- artificial intelligence
- learning algorithm
- machine learning
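The paper's subject, gradient descent on a one-hidden-layer network, can be sketched minimally as follows. This is an illustrative toy (arbitrary sizes, tanh activation, squared loss, hand-picked learning rate), not the paper's algorithm or setting:

```python
import numpy as np

# Toy one-hidden-layer network trained by full-batch gradient descent
# on squared loss. All dimensions and hyperparameters are arbitrary.
rng = np.random.default_rng(0)

n, d, k = 64, 3, 8                    # samples, input dim, hidden units
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))    # arbitrary smooth target function

W = rng.normal(size=(d, k)) * 0.5     # input-to-hidden weights
a = rng.normal(size=k) * 0.5          # hidden-to-output weights
lr = 0.05

losses = []
for _ in range(200):
    H = np.tanh(X @ W)                # hidden activations, shape (n, k)
    pred = H @ a                      # network output, shape (n,)
    err = pred - y
    losses.append(0.5 * np.mean(err ** 2))
    # Backpropagation: gradients of the mean squared loss.
    grad_a = H.T @ err / n
    grad_W = X.T @ (np.outer(err, a) * (1 - H ** 2)) / n
    a -= lr * grad_a
    W -= lr * grad_W
```

With a small enough step size the training loss decreases over the run, which is the kind of convergence behavior the paper analyzes (and bounds) rigorously.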