TanhExp: A Smooth Activation Function with High Convergence Speed for Lightweight Neural Networks.
Xinyu Liu, Xiaoguang Di. Published in: CoRR (2020)
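The paper defines the TanhExp activation as f(x) = x · tanh(eˣ): near identity for large positive inputs, bounded and smooth for negative ones. A minimal NumPy sketch (the function name `tanhexp` is illustrative, not from a library):

```python
import numpy as np

def tanhexp(x):
    """TanhExp activation: f(x) = x * tanh(exp(x)).

    For large positive x, tanh(exp(x)) saturates at 1, so the
    function approaches the identity; for very negative x, exp(x)
    vanishes and the output goes to 0.
    """
    return x * np.tanh(np.exp(x))

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(tanhexp(x))
```

Note that `np.exp` overflows for large inputs (emitting a runtime warning), but `np.tanh(inf)` still evaluates to 1.0, so the result stays finite.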
Keyphrases
- lightweight
- convergence speed
- activation function
- learning rate
- neural network
- convergence rate
- hidden layer
- particle swarm optimization
- backpropagation
- feedforward neural networks
- differential evolution
- error function
- hidden neurons
- artificial neural networks
- step size
- multilayer perceptron
- network architecture
- pattern recognition
- training algorithm
- basis functions
- wireless sensor networks
- training phase
- steady-state error
- fuzzy neural network
- neural network model
- hidden nodes
- extreme learning machine
- training speed