Neural network integral representations with the ReLU activation function.
Armenak Petrosyan, Anton Dereventsov, Clayton G. Webster. Published in: MSML (2020)
Keyphrases
- activation function
- neural network
- back propagation
- artificial neural networks
- hidden layer
- feed forward neural networks
- feed forward
- hidden neurons
- neural nets
- connection weights
- single hidden layer
- sigmoid function
- chaotic neural network
- hidden nodes
- learning rate
- backpropagation algorithm
- network architecture
- pattern recognition
- recurrent neural networks
- fuzzy neural network
- neural network model
- multilayer perceptron
- training phase
- bp neural network
- single layer
- extreme learning machine
- training algorithm
- radial basis function
- basis functions
- rbf neural network
- trained neural network
- fuzzy logic
- hidden units
- input output
- knn
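As a minimal illustration of the kind of model the title and keyphrases refer to (a single-hidden-layer feedforward network with the ReLU activation function), the sketch below evaluates such a network on an input vector. The weights, biases, and outer coefficients are hypothetical example values, not taken from the paper.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied elementwise
    return np.maximum(0.0, x)

def shallow_relu_net(x, W, b, c):
    # Single-hidden-layer feedforward network:
    #   f(x) = sum_k c_k * relu(<w_k, x> + b_k)
    # W: (hidden, input) connection weights, b: (hidden,) biases,
    # c: (hidden,) outer coefficients
    return relu(x @ W.T + b) @ c

# Hypothetical example: 3 hidden neurons, 2-dimensional input
W = np.array([[1.0, -1.0], [0.5, 2.0], [-1.0, 0.0]])
b = np.array([0.0, -1.0, 0.5])
c = np.array([1.0, -2.0, 0.5])

x = np.array([1.0, 1.0])
y = shallow_relu_net(x, W, b, c)  # → -3.0
```

Here the hidden pre-activations are [0, 1.5, -0.5]; ReLU zeroes the negative one, and the outer coefficients combine the rest to give 1.5 * (-2.0) = -3.0.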