Compression of Generative Pre-trained Language Models via Quantization.
Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong
Published in: ACL (1) (2022)
Keyphrases
- language model
- pre-trained
- language modeling
- training data
- probabilistic model
- training examples
- n-gram
- generative model
- document retrieval
- test collection
- information retrieval
- language modeling framework
- language modelling
- statistical language models
- query expansion
- retrieval model
- control signals
- speech recognition
- document ranking
- relevance model
- smoothing methods
- data sets
- topic models
- active learning
- appearance variations
- machine learning