Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher.
Guangda Ji, Zhanxing Zhu. Published in: CoRR (2020)