Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract).
Schyler Chengyao Sun, Chen Li, Zhuangkun Wei, Antonios Tsourdos, Weisi Guo. Published in: AAAI (2021)
Keyphrases
- activation function
- neural network
- artificial neural networks
- back propagation
- hidden layer
- feed forward neural networks
- connection weights
- multilayer perceptron
- neural nets
- neural architecture
- hidden neurons
- hidden nodes
- learning rate
- learning process
- basis functions
- network architecture
- radial basis function
- pattern recognition
- genetic algorithm
- fuzzy neural network
- learning styles
- neural network model
- artificial intelligence
- learning experience
- non-stationary
- rbf neural network
- input output
- high dimensional
- global exponential stability
- machine learning