Markov decision processes with noise-corrupted and delayed state observations.
James L. Bander, Chelsea C. White III
Published in: J. Oper. Res. Soc. (1999)
Keyphrases
- markov decision processes
- state space
- optimal policy
- finite state
- action space
- partially observable
- reinforcement learning
- dynamic programming
- real time dynamic programming
- transition matrices
- reinforcement learning algorithms
- reachability analysis
- markov decision process
- policy iteration
- finite horizon
- state abstraction
- decision theoretic planning
- planning under uncertainty
- risk sensitive
- infinite horizon
- discounted reward
- decision processes
- action sets
- probabilistic planning
- total reward
- model based reinforcement learning
- factored mdps
- state and action spaces
- reward function
- state variables
- average reward
- markov chain
- model checking
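The keyphrases above center on partially observable Markov decision processes, where a noisy observation channel forces the decision maker to maintain a belief over the true state. A minimal sketch of the standard belief-state (Bayes filter) update is below; the two-state model, matrices, and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Belief update for a finite-state MDP observed through a noisy channel
# (a standard POMDP filter). All numbers here are hypothetical.

# T[s, s']: state-transition probabilities under a fixed action
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Z[s', o]: probability of emitting observation o from next state s'
Z = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def belief_update(b, obs):
    """Predict the next-state distribution through T, then apply a
    Bayesian correction using the noisy observation likelihood Z."""
    predicted = b @ T                     # prior over the next state
    unnormalized = predicted * Z[:, obs]  # weight by observation likelihood
    return unnormalized / unnormalized.sum()

b0 = np.array([0.5, 0.5])   # uniform initial belief
b1 = belief_update(b0, obs=0)
print(b1)                   # posterior belief over the two states
```

Planning then proceeds over this belief simplex rather than the raw state space, which is why the keyphrases pair "partially observable" with dynamic programming and policy iteration.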