Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network Using Truncated Gaussian Approximation.
Zhezhi He, Deliang Fan. Published in: CVPR (2019)
Keyphrases
- neural network
- weight function
- gaussian convolution
- taylor series
- linear computational complexity
- vector quantization
- back propagation
- series expansion
- error bounds
- artificial neural networks
- image compression
- taylor series expansion
- weight update
- polynomial approximation
- genetic algorithm
- quantization scheme
- coding scheme
- fuzzy neural network
- akaike information criterion
- multiresolution
- maximum likelihood
- self organizing maps
- gaussian mixture model
- synaptic weights
- simultaneous optimization
- neural network is trained
- neural network model
- feed forward
- quantization error
- nonlinear functions
- hidden layer
- multilayer perceptron
- expectation propagation
- distortion measure
- locally adaptive
- computational complexity