Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers.
Alexander Wong, Mohammad Javad Shafiee, Saad Abbasi, Saeejith Nair, Mahmoud Famouri
Published in: CoRR (2022)