How degenerate is the parametrization of neural networks with the ReLU activation function?
Julius Berner, Dennis Elbrächter, Philipp Grohs
Published in: CoRR (2019)
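The "degeneracy" in the title refers to the fact that many distinct parameter choices can realize the same ReLU network function. A minimal sketch (not taken from the paper, only illustrating the well-known positive-scaling invariance of ReLU; the helper `realize` and the scaling factor `lam` are hypothetical names chosen here for illustration):

```python
import numpy as np

def relu(x):
    # ReLU is positively homogeneous: relu(lam * z) == lam * relu(z) for lam > 0
    return np.maximum(x, 0.0)

def realize(W1, b1, W2, b2, x):
    # Two-layer ReLU network: x -> W2 @ relu(W1 @ x + b1) + b2
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# Rescaling the hidden layer by lam > 0 yields a different parametrization
# of the exact same function: (lam*W1, lam*b1, W2/lam, b2).
lam = 3.7
x = rng.normal(size=3)
y_original = realize(W1, b1, W2, b2, x)
y_rescaled = realize(lam * W1, lam * b1, W2 / lam, b2, x)
print(np.allclose(y_original, y_rescaled))  # True
```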
Keyphrases
- activation function
- neural network
- feed forward
- hidden layer
- back propagation
- feed forward neural networks
- artificial neural networks
- neural nets
- hidden neurons
- multilayer perceptron
- learning rate
- hidden nodes
- single layer
- network architecture
- single hidden layer
- radial basis function
- fuzzy logic
- fuzzy neural network
- backpropagation algorithm
- chaotic neural network
- feedforward neural networks
- pattern recognition
- training algorithm
- multi layer perceptron
- neural network model
- basis functions
- learning algorithm
- extreme learning machine
- recurrent neural networks
- sigmoid function