Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
Simon S. Du, Jason D. Lee, Yuandong Tian, Aarti Singh, Barnabás Póczos
Published in: ICML (2018)
Keyphrases
- hidden layer
- back propagation
- learning rate
- feed forward
- artificial neural networks
- error function
- neural network
- recurrent neural networks
- feedforward neural networks
- activation function
- cost function
- cellular neural networks
- multilayer perceptron
- global minimum
- feed forward neural networks
- radial basis function
- rbf neural network
- neural nets
- latent variables
- training algorithm
- fuzzy neural network
- output layer
- learning rules
- objective function
- simulated annealing
- search space
- hidden nodes
- number of hidden layers
- hidden units
- bp neural network
- input output
- fuzzy logic
- probabilistic model