Deep compressive offloading: speeding up neural network inference by trading edge computation for network latency.
Shuochao Yao
Jinyang Li
Dongxin Liu
Tianshi Wang
Shengzhong Liu
Huajie Shao
Tarek F. Abdelzaher
Published in: SenSys (2020)
Keyphrases
network latency
neural network
response time
memory requirements
distributed network
network bandwidth
image retrieval