Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning
Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang
Published in: CoRR (2022)
Keyphrases
- language model
- meta-learning
- pre-trained
- language modeling
- speech recognition
- probabilistic model
- learning tasks
- inductive learning
- training examples
- information retrieval
- decision trees
- model selection
- training set
- machine learning algorithms
- machine learning
- active learning
- text classification
- training samples
- feature space
- natural language
- training data
- feature extraction
- smoothing methods