On the Complexity of Finite Memory Policies for Markov Decision Processes
Danièle Beauquier, Dima Burago, Anatol Slissenko
Published in: MFCS (1995)
Keyphrases
- markov decision processes
- optimal policy
- state and action spaces
- decision problems
- stationary policies
- markov decision process
- average cost
- decision processes
- state space
- reinforcement learning
- reward function
- finite state
- average reward
- dynamic programming
- policy iteration
- markov decision problems
- total reward
- infinite horizon
- decision theoretic planning
- finite horizon
- policy iteration algorithm
- action space
- decentralized control
- transition matrices
- reachability analysis
- risk sensitive
- partially observable markov decision processes
- reinforcement learning algorithms
- partially observable
- model based reinforcement learning
- discounted reward
- macro actions
- control policies
- planning under uncertainty
- long run
- expected reward
- sufficient conditions
- state abstraction
- linear programming
- action sets
- computational complexity
- partially observable markov decision process
- machine learning
- multistage
- finite number
- real time dynamic programming
- initial state