Compression of Generative Pre-trained Language Models via Quantization.
Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong. Published in: CoRR (2022)
Keyphrases
- language model
- pre-trained
- language modeling
- probabilistic model
- training data
- n-gram
- training examples
- document retrieval
- generative model
- information retrieval
- language modelling
- query expansion
- language modeling framework
- statistical language models
- speech recognition
- smoothing methods
- control signals
- test collection
- retrieval model
- document ranking
- unsupervised learning
- relevance model
- feature vectors
- high dimensional
- training set
- machine learning