n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization.
Yuiko Sakuma
Hiroshi Sumihiro
Jun Nishikawa
Toshiki Nakamura
Ryoji Ikegaya
Published in: CoRR (2021)
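The title refers to powers-of-two quantization, where weights are constrained to values of the form ±2^k. As a rough, generic illustration only (not the paper's n-hot scheme; the exponent range and rounding rule below are assumptions), a minimal sketch in Python:

```python
import numpy as np

def quantize_pow2(weights, min_exp=-8, max_exp=0):
    """Round each nonzero weight to the nearest signed power of two.

    Generic powers-of-two quantization sketch; the exponent range and
    log-domain rounding are assumptions, not the paper's n-hot method.
    """
    signs = np.sign(weights)
    magnitudes = np.abs(weights)
    safe = np.where(magnitudes > 0, magnitudes, 1.0)  # avoid log2(0)
    exponents = np.clip(np.round(np.log2(safe)), min_exp, max_exp)
    quantized = signs * np.exp2(exponents)
    return np.where(magnitudes > 0, quantized, 0.0)

# Example: values snap to +/- 2^k within the assumed exponent range
w = np.array([0.3, -0.12, 0.05, 0.0, -0.9])
print(quantize_pow2(w))
```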
Keyphrases
neural network
multiscale
artificial neural networks
pattern recognition
high dimensional
cost effective
genetic algorithm
back propagation
fault diagnosis
neural nets
adaptive quantization
higher level
lightweight
bp neural network
learning vector quantization