MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies.
Shiyue Zhang, Shijie Wu, Ozan Irsoy, Steven Lu, Mohit Bansal, Mark Dredze, David S. Rosenberg
Published in: ACL (1) (2023)
Keyphrases
- language model
- autoregressive
- language modeling
- moving average
- probabilistic model
- document retrieval
- n-gram
- non-stationary
- speech recognition
- information retrieval
- Gaussian Markov random field
- random fields
- language modelling
- test collection
- statistical language models
- retrieval model
- context sensitive
- query expansion
- training set
- smoothing methods
- relevance model
- document ranking
- SAR images
- translation model
- model selection
- active learning