Mixed-Precision Inference Quantization: Radically Towards Faster Inference Speed, Lower Storage Requirement, and Lower Loss
Daning Cheng
Wenguang Chen
Published in:
CoRR (2022)
Keyphrases
storage requirements
inference engine
bayesian networks
probabilistic inference
high speed
computational complexity
precision and recall
belief networks
data sets
information retrieval
computer vision
motion estimation
vector quantization
memory efficient
grammatical inference