Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach
Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang
Published in: NAACL-HLT (2021)
Keyphrases
- fine-tuning
- language model
- pre-trained
- language modeling
- training data
- training set
- n-gram
- training examples
- document retrieval
- probabilistic model
- speech recognition
- retrieval model
- control signals
- information retrieval
- context-sensitive
- cost-sensitive
- test collection
- ad hoc information retrieval
- least squares
- semi-supervised learning
- query expansion
- smoothing methods
- computer vision
- translation model
- mixture model
- cross-lingual
- relevance model
- semi-supervised
- learning algorithm