Efficient Approximate Value Iteration for Continuous Gaussian POMDPs.
Jur van den Berg, Sachin Patil, Ron Alterovitz. Published in: AAAI (2012)
Keyphrases
- markov decision processes
- partially observable markov decision processes
- belief state
- partially observable markov
- belief space
- state space
- continuous state spaces
- optimal policy
- finite state
- reinforcement learning
- action space
- dynamic programming
- continuous action
- gaussian mixture model
- maximum likelihood
- dynamical systems
- markov decision process
- markov decision chains
- continuous state
- gaussian densities
- partially observable
- policy iteration
- heuristic search
- average reward
- planning under uncertainty
- decision problems
- continuous valued
- distributed constraint optimization
- neural network
- probability distribution
- approximate solutions
- multi agent
- function approximation
- planning problems
- dynamic environments