MPress: Democratizing Billion-Scale Model Training on Multi-GPU Servers via Memory-Saving Inter-Operator Parallelism.

Quan Zhou, Haiquan Wang, Xiaoyan Yu, Cheng Li, Youhui Bai, Feng Yan, Yinlong Xu
Published in: HPCA (2023)