A Nearly Optimal and Agnostic Algorithm for Properly Learning a Mixture of k Gaussians, for Any Constant k
Jerry Zheng Li, Ludwig Schmidt
Published in: CoRR (2015)
Keyphrases
- learning algorithm
- dynamic programming
- expectation maximization
- computational cost
- learning process
- learning speed
- incremental learning
- detection algorithm
- optimal solution
- neural network
- learning tasks
- matching algorithm
- optimization algorithm
- active learning
- learning phase
- computational complexity
- globally optimal
- probabilistic model
- worst case
- preprocessing
- em algorithm
- supervised learning
- np hard
- mixture model
- particle swarm optimization
- objective function
- online learning
- similarity measure
- gaussian mixture
- learning environment
- optimal parameters
- k means