Neural gradients are near-lognormal: improved quantized and sparse training.
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry
Published in: ICLR (2021)
Keyphrases
neural network
recurrent networks
sparse data
network architecture
training algorithm
training set
high dimensional
training samples
bio inspired
avoid overfitting
training process
associative memory
dictionary learning
test set
online learning
elastic net
supervised learning