Online RL in Linearly q^π-Realizable MDPs Is as Easy as in Linear MDPs If You Learn What to Ignore
Gellért Weisz, András György, Csaba Szepesvári. Published in: NeurIPS (2023)
Keyphrases
- markov decision processes
- reinforcement learning
- factored mdps
- finite horizon
- optimal policy
- state space
- semi markov decision processes
- decision theoretic planning
- dynamic programming
- action sets
- factored markov decision processes
- policy search
- planning under uncertainty
- average reward
- policy iteration
- markov decision process
- function approximators
- markov decision problems
- decision diagrams
- finite state
- real time dynamic programming
- probabilistic planning
- policy evaluation
- initial state
- stochastic domains
- linear programming
- model based reinforcement learning
- continuous state and action spaces
- machine learning