Small nonlinearities in activation functions create bad local minima in neural networks.
Chulhee Yun, Suvrit Sra, Ali Jadbabaie
Published in: ICLR (Poster), 2019
Keyphrases
- activation function
- neural network
- local minima
- feedforward neural networks
- multilayer perceptron
- hidden layer
- hidden neurons
- nonlinear functions
- connection weights
- network architecture
- back propagation
- learning rate
- learning algorithm
- training phase