Interpolated Adversarial Training: Achieving robust neural networks without sacrificing too much accuracy.
Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Juho Kannala, Yoshua Bengio
Published in: Neural Networks (2022)
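
As a rough illustration of the technique named in the title, the sketch below combines standard adversarial training with Mixup-style interpolation applied to both clean and adversarial batches. The FGSM attack, the equal loss weighting, and all hyperparameters are assumptions made for illustration, not the authors' exact procedure.

```python
# Minimal sketch of interpolated adversarial training (illustrative assumptions,
# not the paper's exact procedure): train on Mixup-interpolated versions of both
# the clean batch and an adversarially perturbed batch.
import numpy as np
import torch
import torch.nn.functional as F


def mixup(x, y, alpha=1.0):
    """Interpolate a batch with a shuffled copy of itself (Mixup)."""
    lam = float(np.random.beta(alpha, alpha))
    idx = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[idx], y, y[idx], lam


def fgsm_attack(model, x, y, eps=8 / 255):
    """Single-step FGSM adversary (a stand-in for a stronger attack such as PGD)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()


def iat_step(model, optimizer, x, y):
    """One training step on interpolated clean and adversarial examples."""
    x_adv = fgsm_attack(model, x, y)

    x_mix, ya, yb, lam = mixup(x, y)
    xa_mix, ya2, yb2, lam2 = mixup(x_adv, y)

    logits_clean = model(x_mix)
    logits_adv = model(xa_mix)

    # Mixup loss: interpolate the losses for the two label sets with the same lambda.
    loss_clean = lam * F.cross_entropy(logits_clean, ya) + (1 - lam) * F.cross_entropy(logits_clean, yb)
    loss_adv = lam2 * F.cross_entropy(logits_adv, ya2) + (1 - lam2) * F.cross_entropy(logits_adv, yb2)
    loss = 0.5 * (loss_clean + loss_adv)  # equal weighting is an assumption

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
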
Keyphrases
- neural network
- training algorithm
- training process
- high accuracy
- error rate
- highly accurate
- prediction accuracy
- pattern recognition
- online learning
- neural network training
- recurrent networks
- feed-forward neural networks
- multi-layer
- multi-agent
- e-learning
- sufficiently accurate
- supervised learning
- computational cost
- multi-layer perceptron
- feedforward neural networks
- genetic algorithm
- training speed
- recurrent neural networks
- backpropagation
- classification accuracy
- training set
- fuzzy systems
- training samples
- computationally efficient
- low resolution
- support vector
- neural network structure
- decision trees
- error backpropagation