Hierarchy through Composition with Linearly Solvable Markov Decision Processes.
Andrew M. Saxe, Adam Christopher Earle, Benjamin Rosman. Published in: CoRR (2016)
Keyphrases
- markov decision processes
- finite state
- reinforcement learning
- state space
- special case
- dynamic programming
- decision processes
- optimal policy
- np complete
- policy iteration
- reachability analysis
- decision theoretic planning
- reinforcement learning algorithms
- planning under uncertainty
- average reward
- model based reinforcement learning
- np hard
- partially observable
- web service composition
- transition matrices
- action space
- computational complexity
- markov decision process
- finite horizon
- factored mdps
- average cost
- risk sensitive
- reward function
- infinite horizon
- function approximation
- state and action spaces
- action sets
- discounted reward
- linear programming
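
Since the listing above gives only the title and auto-extracted keyphrases, the following is a minimal sketch, not code from the paper, of the linearly solvable MDP (LMDP) property the title refers to: in a first-exit LMDP the Bellman equation over the desirability function z = exp(-v) is linear, so the optimal z for a task whose terminal desirability is a weighted blend of component tasks' terminal desirabilities is exactly the same blend of the component solutions, which is the basis for composing policies. The toy chain domain, the helper `solve_lmdp`, and all numeric values below are assumptions chosen only for illustration.

```python
"""Illustrative sketch of first-exit LMDP solution and compositionality
(not code from the paper; domain and costs are assumed for the example)."""
import numpy as np

def solve_lmdp(P, q, z_boundary, interior, boundary):
    """Solve the linear Bellman equation of a first-exit LMDP.

    z_i = exp(-q_i) * sum_j P[i, j] * z_j   for interior states i
    z_b = z_boundary[b]                     for boundary (terminal) states b

    Rearranged over interior states: (I - Q P_II) z_I = Q P_IB z_B,
    with Q = diag(exp(-q_I)). Returns the full desirability vector z;
    then v = -log(z) and the optimal policy is u*(j|i) proportional to P[i, j] z[j].
    """
    Q = np.diag(np.exp(-q[interior]))
    P_II = P[np.ix_(interior, interior)]
    P_IB = P[np.ix_(interior, boundary)]
    A = np.eye(len(interior)) - Q @ P_II
    z = np.zeros(P.shape[0])
    z[boundary] = z_boundary
    z[interior] = np.linalg.solve(A, Q @ P_IB @ z_boundary)
    return z

# Toy domain (assumed): a 7-state chain; states 0 and 6 are terminal, 1..5 interior.
n = 7
interior = np.arange(1, 6)
boundary = np.array([0, 6])
P = np.zeros((n, n))
for i in interior:                      # passive dynamics: unbiased random walk
    P[i, i - 1] = P[i, i + 1] = 0.5
q = np.full(n, 0.1)                     # uniform state cost on interior states

# Two component tasks, distinguished only by their terminal desirabilities
# (e.g. "exit left" vs. "exit right").
zb_left = np.array([1.0, 0.01])
zb_right = np.array([0.01, 1.0])
z_left = solve_lmdp(P, q, zb_left, interior, boundary)
z_right = solve_lmdp(P, q, zb_right, interior, boundary)

# Compositionality: because the map from terminal desirabilities to the interior
# solution is linear, a weighted blend of terminal desirabilities yields exactly
# the same weighted blend of the component desirability functions.
w = np.array([0.3, 0.7])
z_composite = solve_lmdp(P, q, w[0] * zb_left + w[1] * zb_right, interior, boundary)
assert np.allclose(z_composite, w[0] * z_left + w[1] * z_right)
print("composite z:", np.round(z_composite, 4))
```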