An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers
Hui Xie, Jirong Yi, Weiyu Xu, Raghu Mudumbai. Published in: CoRR (2019)
Keyphrases
- information-theoretic
- mutual information
- information theory
- theoretic framework
- decision trees
- feature selection
- information bottleneck
- Jensen-Shannon divergence
- support vector
- entropy measure
- minimum description length
- training data
- training set
- log-likelihood
- Kullback-Leibler divergence
- computational learning theory
- multi-modality
- training samples
- machine learning
- KL divergence
- naive Bayes
- information-theoretic measures
- loss function
- k-nearest neighbor
- feature set
- nearest neighbor
- learning algorithm