Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
Ayesha Siddique, Khaza Anuarul Hoque. Published in: CoRR (2021)
Keyphrases
- neural network
- artificial neural networks
- pattern recognition
- countermeasures
- closed form
- error bounds
- training process
- approximation algorithms
- fuzzy logic
- backpropagation
- neural nets
- genetic algorithm
- security threats
- approximation error
- deep learning
- fuzzy systems
- recurrent neural networks
- network security
- feedforward
- multilayer perceptron
- rule extraction
- neural network model
- fault diagnosis
- multi-agent
- traffic analysis
- security risks