MPress: Democratizing Billion-Scale Model Training on Multi-GPU Servers via Memory-Saving Inter-Operator Parallelism.

Quan Zhou, Haiquan Wang, Xiaoyan Yu, Cheng Li, Youhui Bai, Feng Yan, Yinlong Xu
Published in: HPCA (2023)