Learning Policies in Partially Observable MDPs with Abstract Actions Using Value Iteration.
Hamed Janzadeh, Manfred Huber
Published in: FLAIRS Conference (2013)
Keyphrases
- partially observable
- markov decision processes
- state space
- reinforcement learning
- action models
- markov decision problems
- reward function
- optimal policy
- infinite horizon
- partial observations
- decision problems
- partially observable markov decision processes
- belief space
- belief state
- dynamical systems
- markov decision process
- discount factor
- partially observable environments
- partial observability
- partially observable domains
- inverse reinforcement learning
- partially observable markov decision process
- dynamic programming
- heuristic search
- policy iteration
- average cost
- decision processes
- action space
- domain specific
- np hard
- knowledge base