Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli
Published in: ICML (2020)
Keyphrases
- learning algorithm
- supervised learning
- feedforward neural networks
- learning process
- learning systems
- neural network
- data sets
- reinforcement learning
- prior knowledge
- online learning
- deep architectures
- recurrent networks
- structured prediction
- learning tasks
- training samples
- access control
- web applications
- active learning
- training data
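The title refers to the paper's idea of training on "friendly" adversarial examples: rather than running a projected gradient descent (PGD) attack for a full, fixed number of steps, the attack is stopped shortly after the example is first misclassified, so the perturbation challenges the model without destroying training. The following is a minimal illustrative sketch of that early-stopping idea in PyTorch, not the authors' released implementation; the names `friendly_pgd`, `train_epoch`, `model`, and `loader`, the hyperparameter values, and the assumption of image inputs in [0, 1] are all illustrative assumptions.

```python
# Sketch of early-stopped ("friendly") PGD adversarial training.
# Assumes a PyTorch image classifier and inputs scaled to [0, 1].
import torch
import torch.nn.functional as F

def friendly_pgd(model, x, y, eps=8/255, step=2/255, max_steps=10, tau=1):
    """Run PGD, but let each example take only `tau` more steps
    after it is first misclassified (early stopping per example)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    budget = torch.full((x.size(0),), tau, device=x.device)
    for _ in range(max_steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Spend budget for examples that are already misclassified.
            wrong = logits.argmax(dim=1) != y
            budget = budget - wrong.long()
            # Only examples with remaining budget take a further PGD step.
            active = (budget >= 0).float().view(-1, 1, 1, 1)
            x_adv = x_adv + active * step * grad.sign()
            # Project back into the epsilon-ball and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv

def train_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of adversarial training on friendly adversarial examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = friendly_pgd(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

The design choice being illustrated is the per-example stopping criterion: the attack loop keeps a small step budget that only starts to drain once an example crosses the decision boundary, so training sees attacks that are hard but not so strong that they "kill" learning.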