Speeding Up Planning in Markov Decision Processes via Automatically Constructed Abstractions
Alejandro Isaza, Csaba Szepesvári, Vadim Bulitko, Russell Greiner
Published in: CoRR (2012)
Keyphrases
- markov decision processes
- macro actions
- planning under uncertainty
- partially observable
- decision theoretic planning
- state space
- finite state
- reinforcement learning
- optimal policy
- planning problems
- partially observable markov decision processes
- policy iteration
- transition matrices
- dynamic programming
- heuristic search
- reachability analysis
- probabilistic planning
- reward function
- reinforcement learning algorithms
- factored mdps
- decision processes
- finite horizon
- action space
- average reward
- infinite horizon
- state and action spaces
- markov decision process
- model based reinforcement learning
- markov decision problems
- machine learning
- action sets
- semi markov decision processes
- discounted reward
- interval estimation
- least squares
- risk sensitive
- decision makers
- average cost
- decision problems
- multiple agents