A generalization of the maximum a posteriori training algorithm for mixture priors.
Eric R. Buhrke, Chen Liu
Published in: ICASSP (2000)
Keyphrases
- training algorithm
- maximum a posteriori
- expectation maximization
- MAP estimation
- back propagation
- Markov random field
- neural network
- image reconstruction
- maximum likelihood
- EM algorithm
- Bayesian framework
- prior model
- mixture model
- training process
- posterior distribution
- learning rate
- support vector machine
- energy function
- RBF neural network
- prior distribution
- learning algorithm
- artificial neural networks
- hidden layer
- edge preserving
- image segmentation
- k-means
- fuzzy logic
- neural network model
- training data
- computer vision
- high quality
- control system
- data sets