Using function approximation to analyze the sensitivity of MLP with antisymmetric squashing activation function.
Daniel S. Yeung, Xuequan Sun. Published in: IEEE Trans. Neural Networks (2002)
Keyphrases
- function approximation
- activation function
- hidden nodes
- radial basis function
- multilayer perceptron
- hidden layer
- neural network
- multi layer perceptron
- reinforcement learning
- artificial neural networks
- rbf neural network
- feed forward neural networks
- hidden neurons
- rbf network
- feed forward
- back propagation
- basis functions
- learning rate
- rbfnn
- neural nets
- learning tasks
- feedforward neural networks
- support vector
- machine learning
- linear combination
- small number