Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity

Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, Shuaiwen Leon Song
Published in: CoRR (2023)