FPTQ: Fine-grained Post-Training Quantization for Large Language Models
Qingyuan Li, Yifan Zhang, Liang Li, Peng Yao, Bo Zhang, Xiangxiang Chu, Yerui Sun, Li Du, Yuchen Xie
Published in: CoRR (2023)
Keyphrases
- fine grained
- language model
- coarse grained
- language modeling
- n gram
- probabilistic model
- document retrieval
- access control
- speech recognition
- language modelling
- retrieval model
- query expansion
- context sensitive
- information retrieval
- statistical language models
- test collection
- relevance model
- vector space model
- query terms
- smoothing methods
- language models for information retrieval
- pseudo relevance feedback
- translation model
- data lineage