SMU: smooth activation function for deep networks using smoothing maximum technique.
Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey. Published in: CoRR (2021)
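The title refers to an activation built from a smooth approximation of the maximum function. As a minimal illustrative sketch (not the paper's official code), the snippet below shows an erf-based smooth maximum and an SMU-style activation that smoothly approximates a leaky-ReLU-like `max(x, alpha*x)`; the hyperparameter names `alpha` and `mu` and their default values are assumptions for illustration.

```python
import math


def smooth_max(a: float, b: float, mu: float = 1e6) -> float:
    # Smooth approximation of max(a, b) using the error function:
    #   max(a, b) ~= ((a + b) + (a - b) * erf(mu * (a - b))) / 2
    # Larger mu gives a closer, but less smooth, approximation.
    return ((a + b) + (a - b) * math.erf(mu * (a - b))) / 2.0


def smu(x: float, alpha: float = 0.25, mu: float = 2.5) -> float:
    # SMU-style activation: applying the smooth maximum to the pair
    # (x, alpha * x) yields a smooth approximation of Leaky ReLU.
    return ((1 + alpha) * x + (1 - alpha) * x * math.erf(mu * (1 - alpha) * x)) / 2.0


if __name__ == "__main__":
    # Quick check: for large positive x the output approaches x,
    # for large negative x it approaches alpha * x.
    for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print(f"x = {x:+.1f}  smu(x) = {smu(x):+.4f}")
```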
Keyphrases
- activation function
- network size
- neural network
- hidden layer
- artificial neural networks
- feed forward
- back propagation
- feed forward neural networks
- multilayer perceptron
- chaotic neural network
- neural nets
- fuzzy neural network
- feedforward neural networks
- hidden nodes
- learning rate
- sigmoid function
- rbf neural network
- radial basis function
- network structure
- basis functions
- machine learning
- single layer
- classification algorithm