Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness.
Andrey Malinin
Mark J. F. Gales
Published in:
NeurIPS (2019)
Keyphrases
kl divergence
kullback leibler
kullback leibler divergence
mahalanobis distance
gaussian mixture
information theoretic
prior knowledge
gaussian distribution
posterior distribution
network structure
training set
supervised learning
em algorithm
dissimilarity measure
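The title and the first three keyphrases center on the KL (Kullback-Leibler) divergence and, specifically, its "reverse" direction. A minimal sketch of why the direction matters: KL divergence is asymmetric, so KL(p‖q) and KL(q‖p) generally differ. The distributions `p` and `q` below are hypothetical examples, not from the paper.

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence KL(p || q) = sum_i p_i * log(p_i / q_i)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Two example categorical distributions (hypothetical values).
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

forward = kl(p, q)  # KL(p || q): the "forward" direction
reverse = kl(q, p)  # KL(q || p): the "reverse" direction
print(forward, reverse)  # the two values differ: KL is not symmetric
```

The paper itself applies this directional choice to training Prior Networks, where swapping the arguments of the KL loss changes which distribution the model is encouraged to cover versus concentrate on.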