Reachability in MDPs: Refining Convergence of Value Iteration
Serge Haddad, Benjamin Monmege. Published in: RP (2014)
Keyphrases
- stochastic shortest path
- markov decision processes
- state space
- policy iteration
- reinforcement learning
- markov decision process
- markov decision problems
- optimal policy
- factored mdps
- convergence rate
- heuristic search
- dynamic programming
- average reward
- stationary policies
- finite horizon
- finite state
- average cost
- partially observable
- action space
- reachability analysis
- decision theoretic planning
- algebraic decision diagrams
- reinforcement learning algorithms
- finite number
- markov chain
- continuous state spaces
- infinite horizon
- decision theoretic
- semi markov decision processes
- search space
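The keyphrases above center on value iteration for reachability objectives in Markov decision processes. As a hedged illustration only (this is a generic textbook-style sketch with a hypothetical dict-based MDP encoding, not the refined convergence scheme the paper develops), value iteration for maximal reachability probabilities can be written as:

```python
def value_iteration(P, goal, eps=1e-8, max_iter=10**6):
    """Compute (approximate) maximal reachability probabilities.

    P maps each state to {action: [(prob, successor), ...]};
    goal is a set of target states. Returns a dict state -> value.
    """
    # Initialise: goal states have value 1, all others 0.
    V = {s: (1.0 if s in goal else 0.0) for s in P}
    for _ in range(max_iter):
        delta = 0.0
        new_V = {}
        for s in P:
            if s in goal:
                new_V[s] = 1.0
                continue
            # Bellman update: best expected value over enabled actions.
            best = 0.0
            for succs in P[s].values():
                best = max(best, sum(p * V[t] for p, t in succs))
            new_V[s] = best
            delta = max(delta, abs(new_V[s] - V[s]))
        V = new_V
        # Naive stopping criterion: stop when the update is small.
        # Turning this into a guaranteed error bound is precisely the
        # kind of convergence question such work addresses.
        if delta < eps:
            break
    return V
```

For example, a two-state MDP where `s0` reaches `goal` with probability 0.5 per step (and otherwise loops) has maximal reachability probability 1, which the iteration approaches geometrically:

```python
P = {'s0': {'a': [(0.5, 'goal'), (0.5, 's0')], 'b': [(1.0, 's0')]},
     'goal': {}}
V = value_iteration(P, {'goal'})
```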