Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability.
Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi
Published in: CVPR (2022)
Keyphrases
- faster convergence
- data sets
- raw data
- knowledge discovery
- domain knowledge
- prior knowledge
- data sources
- knowledge representation
- domain experts
- data processing
- data mining techniques
- database
- data quality
- data analysis
- image quality
- data structure
- knowledge base
- labelled data
- data mining tools
- spatial data
- background knowledge
- missing data
- training examples
- data collection
- input data
- knowledge management
- small number
- image data
- data points
- query processing
- training set