Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference

Haoran You, Yunyang Xiong, Xiaoliang Dai, Bichen Wu, Peizhao Zhang, Haoqi Fan, Peter Vajda, Yingyan Celine Lin
Published in: CVPR (2023)