On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes.
Bruno Scherrer, Boris Lesner. Published in: NIPS (2012)
Keyphrases
- non-stationary
- infinite horizon
- Markov decision processes
- optimal policy
- finite horizon
- Markov decision process
- average cost
- state space
- decision problems
- finite state
- decision processes
- stationary policies
- reinforcement learning
- reward function
- dynamic programming
- long run
- policy iteration
- partially observable Markov decision processes
- total reward
- control policies
- partially observable
- lost sales
- average reward
- multistage
- single item
- policy iteration algorithm
- planning under uncertainty
- Markov decision problems
- state-dependent
- holding cost
- decision-theoretic planning
- expected reward
- reinforcement learning algorithms
- action space
- sufficient conditions
- Dec-POMDPs
- initial state
- machine learning