How to use KL-divergence to construct conjugate priors, with well-defined non-informative limits, for the multivariate Gaussian.
Niko Brümmer. Published in: CoRR (2021)
Keyphrases
- KL divergence
- Gaussian distribution
- Kullback–Leibler divergence
- Mahalanobis distance
- information theoretic
- posterior distribution
- Bregman divergences
- maximum likelihood
- Gaussian mixture
- feature extraction
- pairwise
- active learning
- probability distribution
- higher order
- probability density function
- dissimilarity measure
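The central quantity in the title, the KL divergence between two multivariate Gaussians, has a well-known closed form. A minimal sketch follows (the function name `gaussian_kl` is my own, not from the paper):

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for k-dimensional Gaussians,
    using the standard closed form:
      0.5 * ( tr(S1^-1 S0) + (m1-m0)^T S1^-1 (m1-m0) - k + ln det(S1)/det(S0) )
    """
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)        # tr(S1^-1 S0)
        + diff @ cov1_inv @ diff         # squared Mahalanobis distance of the means
        - k                              # dimension
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )
```

The divergence is zero if and only if the two Gaussians coincide, and it is non-negative otherwise, which is what makes it usable as the dissimilarity measure behind the prior construction.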