Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher.
Guangda Ji, Zhanxing Zhu
Published in: NeurIPS (2020)
Keyphrases
- neural network
- artificial neural networks
- training data
- data points
- synthetic data
- high dimensional data
- prior knowledge
- upper bound