Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning
Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang. Published in: EMNLP (Findings) (2022)
Keyphrases
- language model
- meta-learning
- pre-trained
- language modeling
- probabilistic model
- inductive learning
- learning tasks
- information retrieval
- decision trees
- speech recognition
- training data
- training examples
- smoothing methods
- hidden Markov models
- graph cuts
- model selection
- machine learning algorithms
- test data
- recommender systems
- training set
- base classifiers
- data mining