Synthesizing Policies That Account For Human Execution Errors Caused By State-Aliasing In Markov Decision Processes
Sriram Gopalakrishnan, Mudit Verma, Subbarao Kambhampati. Published in: CoRR (2021)
Keyphrases
- markov decision processes
- optimal policy
- state space
- markov decision process
- total reward
- action space
- discounted reward
- reinforcement learning
- decision processes
- finite state
- average cost
- partially observable
- discount factor
- decision theoretic planning
- reward function
- dynamic programming
- decision problems
- temporally extended
- state abstraction
- control policies
- reinforcement learning algorithms
- transition matrices
- markov decision problems
- finite horizon
- expected reward
- decentralized control
- state and action spaces
- planning under uncertainty
- factored mdps
- real time dynamic programming
- average reward
- heuristic search
- infinite horizon
- risk sensitive
- multistage
- reachability analysis
- partially observable markov decision processes
- state variables
- probabilistic planning
- learning algorithm
- policy iteration
- macro actions
- action sets
- initial state
- dynamical systems
- model based reinforcement learning