M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
Published in: CoRR (2022)
Keyphrases
theoretical analysis
multi task learning
high order
gaussian processes
similarity measure
maximum likelihood
bayesian framework
bayesian model
multitask learning