Deep Neural Network Compression with Knowledge Distillation Using Cross-Layer Matrix, KL Divergence and Offline Ensemble.
Hsing-Hung Chou
Ching-Te Chiu
Yi-Ping Liao
Published in: APSIPA (2020)
Keyphrases
neural network
kl divergence
cross layer
multi layer
wireless networks
kullback leibler divergence
prior knowledge
back propagation
mahalanobis distance
learning algorithm
application layer
real time
feature selection
probability distribution
expectation maximization
routing protocol
exponential family
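The title refers to knowledge distillation with a KL-divergence objective. As a minimal illustration of that general technique (a sketch only, not the authors' cross-layer-matrix or offline-ensemble method; the function names `softmax` and `kd_kl_loss` and the temperature value are illustrative assumptions), a standard temperature-softened distillation loss can be written as:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_kl_loss(teacher_logits, student_logits, temperature=4.0):
    # KL(teacher || student) between softened output distributions,
    # scaled by T^2 as is conventional in knowledge distillation
    # (illustrative defaults; the paper's exact formulation may differ).
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

When the student's logits match the teacher's exactly, the loss is zero; the further the student's softened distribution drifts from the teacher's, the larger the penalty, which is what drives the compressed (student) network to mimic the large (teacher) network.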