Information Geometry of Generalized Bayesian Prediction Using α-Divergences as Loss Functions.
Fode Zhang, Yimin Shi, Hon Keung Tony Ng, Ruibing Wang
Published in: IEEE Trans. Inf. Theory (2018)
Keyphrases
- Bregman divergences
- information geometry
- loss function
- maximum entropy
- cost sensitive
- information theoretic
- pairwise
- prediction accuracy
- learning theory
- boosting algorithms
- Mahalanobis distance
- exponential family
- support vector machines
- special case
- nearest neighbor
- KL divergence
- theoretical guarantees
- information theory
- data dependent
- Fisher information
- non-negative matrix factorization
- reproducing kernel Hilbert space
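For context, a minimal statement of the α-divergence family named in the title, in Amari's convention (parameterizations vary across the literature), with the KL divergence arising as a limiting special case:

% Amari's alpha-divergence between densities p and q (one common convention;
% other parameterizations also appear in the literature).
\[
  D_{\alpha}(p \,\|\, q)
  = \frac{4}{1-\alpha^{2}}
    \left( 1 - \int p(x)^{\frac{1-\alpha}{2}} \, q(x)^{\frac{1+\alpha}{2}} \, dx \right),
  \qquad \alpha \neq \pm 1 .
\]
% Limiting special cases:
%   alpha -> -1 recovers KL(p || q) = \int p \log(p/q) dx,
%   alpha -> +1 recovers the reversed divergence KL(q || p).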