Solving Uncertain MDPs by Reusing State Information and Plans
Ping Hou, William Yeoh, Tran Cao Son. Published in: AAAI (2014)
Keyphrases
- state information
- state space
- action models
- Markov decision problems
- Markov decision processes
- action space
- partially observable
- reinforcement learning
- planning problems
- factored MDPs
- initial state
- orders of magnitude
- Markov chain
- planning domains
- heuristic search
- real-valued
- infinite horizon
- belief state
- mobile robot
- dynamic programming
- active learning
- multi-agent systems
- optimal solution
- decision making