NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks.
Seokil Ham, Jungwuk Park, Dong-Jun Han, Jaekyun Moon. Published in: CoRR (2023)
Keyphrases
- neural network
- domain knowledge
- knowledge acquisition
- knowledge base
- pattern recognition
- training process
- backpropagation algorithm
- feedforward neural networks
- training algorithm
- multi-agent
- training set
- artificial neural networks
- knowledge representation
- multi-layer perceptron
- neural network model
- multi-layer
- symbolic knowledge
- image sequences
- training phase
- self-organizing maps
- design process
- data sets
- knowledge discovery
- prior knowledge