How degenerate is the parametrization of neural networks with the ReLU activation function?
Dennis Elbrächter, Julius Berner, Philipp Grohs. Published in: NeurIPS (2019)
Keyphrases
- activation function
- neural network
- artificial neural networks
- feed-forward
- back-propagation
- hidden layer
- feed forward neural networks
- multilayer perceptron
- hidden neurons
- learning rate
- radial basis function
- neural nets
- single hidden layer
- backpropagation algorithm
- feedforward neural networks
- basis functions
- network architecture
- fuzzy neural network
- multi-layer perceptron
- chaotic neural network
- hidden nodes
- single layer
- sigmoid function
- pattern recognition
- training phase
- extreme learning machine
- neural network model
- fuzzy logic
- multi-layer
- genetic algorithm
- hidden units
- convergence speed
- fault diagnosis
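The degeneracy named in the title can be illustrated with one well-known example: ReLU is positively homogeneous (relu(c·z) = c·relu(z) for c > 0), so rescaling one layer's weights and biases by c while dividing the next layer's weights by c leaves the realized network function unchanged. The sketch below is a minimal NumPy illustration of this invariance; the network shape and all parameter values are arbitrary choices, not taken from the paper.

```python
import numpy as np

def relu(x):
    # ReLU activation: positively homogeneous, relu(c * z) = c * relu(z) for c > 0.
    return np.maximum(x, 0.0)

def two_layer_net(x, W1, b1, W2, b2):
    # Feed-forward network with a single hidden layer and ReLU activation.
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)

x = rng.standard_normal(3)
c = 2.5  # any positive scalar works

# Rescaled parametrization realizes the same function:
# (W2 / c) @ relu(c * (W1 @ x + b1)) + b2 == W2 @ relu(W1 @ x + b1) + b2.
y_original = two_layer_net(x, W1, b1, W2, b2)
y_rescaled = two_layer_net(x, c * W1, c * b1, W2 / c, b2)

print(np.allclose(y_original, y_rescaled))  # True
```

Because this holds for every layer and every positive scalar, infinitely many distinct parameter vectors realize the same network function, which is one basic sense in which the ReLU parametrization is degenerate.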