MXQN: Mixed quantization for reducing bit-width of weights and activations in deep convolutional neural networks.
Chenglong Huang
Puguang Liu
Liang Fang
Published in: Appl. Intell. (2021)
Keyphrases
convolutional neural networks
uniform quantization
convolutional network
linear combination
adaptive quantization
relative importance
weighted sum
quantization scheme
weighting scheme
successive approximation
motion estimation
wavelet transform
vector quantization
subband
lookup table
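Several of the keyphrases above (uniform quantization, quantization scheme) refer to mapping floating-point weights onto a small set of integer levels. As a minimal illustrative sketch of symmetric uniform quantization in NumPy, and not the paper's MXQN method, the bit-width reduction can be written as:

```python
import numpy as np

def uniform_quantize(w, num_bits=8):
    """Symmetric uniform quantization of a tensor to num_bits.

    Illustrative sketch only -- not the MXQN scheme from the paper.
    Returns the integer codes and the scale needed to dequantize
    (approximate reconstruction is q * scale).
    """
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(w)) / qmax         # map largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.0, 0.25, 0.75])
q, scale = uniform_quantize(w, num_bits=4)   # 4-bit codes in [-7, 7]
```

Mixed-quantization approaches such as the one the title describes generalize this idea by assigning different bit-widths to different weights and activations rather than a single `num_bits` for the whole network.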