Training Neural Networks by Using Power Linear Units (PoLUs).
Yikang Li, Pak Lun Kevin Ding, Baoxin Li. Published in: CoRR (2018)
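
This entry carries only the title and bibliographic data, so as a rough illustration of the activation the title refers to, here is a minimal NumPy sketch. It assumes the form f(x) = x for x >= 0 and f(x) = (1 - x)^(-n) - 1 for x < 0; the function name `polu`, the parameter `n`, and this exact formula are assumptions for illustration, not taken from the entry itself, and should be checked against the paper.

```python
import numpy as np

def polu(x, n=1.0):
    """Power Linear Unit (PoLU) activation (sketch, assumed form).

    Assumes f(x) = x for x >= 0 and f(x) = (1 - x)**(-n) - 1 for x < 0,
    where n > 0 controls how quickly the negative side saturates toward -1.
    """
    x = np.asarray(x, dtype=float)
    # Clip to the non-positive range before applying the power so the
    # unused branch of np.where never produces invalid values.
    neg = (1.0 - np.minimum(x, 0.0)) ** (-n) - 1.0
    return np.where(x >= 0.0, x, neg)

# Identity for positive inputs, smooth saturation toward -1 for negatives.
print(polu([-5.0, -1.0, 0.0, 2.0], n=2.0))  # approx. [-0.972, -0.75, 0.0, 2.0]
```
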
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- backpropagation algorithm
- neural network training
- multi layer perceptron
- neural network structure
- pattern recognition
- training set
- artificial neural networks
- power consumption
- linear svm
- back propagation
- multi layer
- training phase
- hidden layer
- recurrent networks
- error back propagation
- fuzzy systems
- recurrent neural networks
- test set
- online learning
- fuzzy logic
- genetic algorithm
- network architecture
- activation function
- radial basis function
- hidden units
- training patterns
- evolutionary algorithm
- feature selection
- spatial filters