Speeding Up Planning in Markov Decision Processes via Automatically Constructed Abstraction.
Alejandro Isaza, Csaba Szepesvári, Vadim Bulitko, Russell Greiner
Published in: UAI (2008)
Keyphrases
- Markov decision processes
- decision theoretic planning
- macro actions
- planning under uncertainty
- partially observable
- optimal policy
- finite state
- state abstraction
- dynamic programming
- state space
- probabilistic planning
- reinforcement learning
- policy iteration
- factored MDPs
- model based reinforcement learning
- finite horizon
- reinforcement learning algorithms
- transition matrices
- reachability analysis
- action space
- infinite horizon
- planning problems
- decision problems
- domain independent
- average reward
- decision processes
- Markov decision process
- partially observable Markov decision processes
- average cost
- decision theoretic
- learning algorithm
- decision diagrams
- risk sensitive
- least squares
- state and action spaces
- search algorithm
- action sets
- discounted reward
- goal oriented
- stochastic shortest path