TanhExp: A smooth activation function with high convergence speed for lightweight neural networks.
Xinyu Liu, Xiaoguang Di
Published in: IET Comput. Vis. (2021)
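The paper's title names the TanhExp activation, which is defined as f(x) = x · tanh(eˣ). The NumPy sketch below illustrates that definition; it is a minimal illustration, not the authors' implementation, and the overflow clamp is an added assumption for numerical safety.

```python
import numpy as np

def tanhexp(x):
    """TanhExp activation: f(x) = x * tanh(exp(x)).

    Smooth and non-monotonic; approaches the identity for large
    positive inputs, since tanh(exp(x)) -> 1.
    """
    x = np.asarray(x, dtype=float)
    # Clamp the exponent to avoid overflow in exp() for large x;
    # tanh already saturates to 1 there, so f(x) ~= x is preserved.
    return x * np.tanh(np.exp(np.minimum(x, 20.0)))

# Example: f(0) = 0, and f(x) ~= x for large positive x.
print(tanhexp(np.array([-2.0, 0.0, 2.0, 10.0])))
```

For large negative inputs, exp(x) tends to 0, so tanh(exp(x)) tends to 0 and the activation smoothly suppresses the signal, while large positive inputs pass through almost unchanged.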
Keyphrases
- lightweight neural networks
- activation function
- convergence speed
- learning rate
- step size
- neural networks
- feedforward neural networks
- multilayer perceptron
- hidden layer
- hidden neurons
- single neuron
- network architecture
- backpropagation
- training algorithm
- error function
- steady state error
- particle swarm optimization (PSO)
- differential evolution
- optimization algorithm
- multi-objective optimization
- radial basis function (RBF) networks
- fuzzy neural network
- wireless sensor networks
- fault diagnosis