IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yilong Zhao, Tao Yang, Yiran Chen, Li Jiang
Published in: IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. (2022)
Keyphrases
- memory requirements
- probabilistic inference
- bayesian inference
- inference process
- bayesian networks
- memory space
- belief networks
- computing power
- neural network
- memory usage
- dynamic bayesian networks
- inference engine
- random access
- efficient learning
- structured prediction
- feature vectors
- real time
- inference mechanism
- memory size
- quantization step