Online RL in Linearly q^π-Realizable MDPs Is as Easy as in Linear MDPs If You Learn What to Ignore
Gellért Weisz, András György, Csaba Szepesvári. Published in: CoRR (2023)
Keyphrases
- markov decision processes
- reinforcement learning
- factored mdps
- state space
- optimal policy
- finite horizon
- dynamic programming
- planning under uncertainty
- policy iteration
- decision theoretic planning
- markov decision problems
- model based reinforcement learning
- average cost
- function approximators
- infinite horizon
- finite state
- semi markov decision processes
- decision diagrams
- neural network
- partially observable
- reward function
- dec pomdps
- linear programming
- search algorithm
- real time dynamic programming
- average reward
- decision processes
- action space
- reinforcement learning algorithms