Achieving Generalizable Robustness of Deep Neural Networks by Stability Training.
Jan Laermann, Wojciech Samek, Nils Strodthoff. Published in: CoRR (2019)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- multi layer perceptron
- neural network training
- backpropagation algorithm
- pattern recognition
- fuzzy logic
- error back propagation
- back propagation
- deep architectures
- neural nets
- multi layer
- computational efficiency
- genetic algorithm
- neural network model
- activation function
- rule extraction
- artificial intelligence
- stability analysis
- high robustness
- training samples
- recurrent networks
- machine learning
- real time
- lyapunov function
- numerical stability
- radial basis function network
- self organizing maps
- fuzzy systems
- feed forward
- closed loop