Achieving Generalizable Robustness of Deep Neural Networks by Stability Training
Jan Laermann, Wojciech Samek, Nils Strodthoff. Published in: GCPR (2019)
Keyphrases
- neural network
- training process
- training algorithm
- feed-forward neural networks
- multi-layer perceptron
- backpropagation algorithm
- error backpropagation
- pattern recognition
- training patterns
- computational efficiency
- deep architectures
- neural network training
- recurrent neural networks
- training examples
- fuzzy logic
- machine learning
- radial basis function network
- training phase
- fault diagnosis
- training samples
- online learning
- activation function
- learning rules
- hidden layer
- training data
- artificial intelligence
- Lyapunov function
- genetic algorithm
- neural network structure
- neural network model
- self-organizing maps