Mitigating sampling bias in risk-based active learning via an EM algorithm.
Aidan J. Hughes, Lawrence A. Bull, Paul Gardner, Nikolaos Dervilis, Keith Worden. Published in: CoRR (2022)
Keyphrases
- EM algorithm
- active learning
- random sampling
- risk management
- expectation maximization
- imbalanced class distribution
- mixture model
- maximum likelihood
- Gaussian mixture model
- parameter estimation
- generative model
- maximum likelihood estimation
- hyperparameters
- likelihood function
- log likelihood
- Gaussian mixture
- expectation maximisation
- sampling methods
- machine learning
- probability density function
- incomplete data
- labeled data
- semi-supervised
- supervised learning
- class imbalance
- training examples
- finite mixture model
- hidden variables
- log likelihood function
- training set
- Gibbs sampling
- sample size
- semi-supervised learning
- maximum a posteriori
- penalized likelihood
- matrix factorisation
- Markov chain Monte Carlo
- density estimation
- unsupervised learning
- graphical models
- probability distribution
- k-means
- computer vision