Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes.
Andrew J. Wagenmaker, Yifang Chen, Max Simchowitz, Simon S. Du, Kevin G. Jamieson
Published in: ICML (2022)
Keyphrases
- markov decision processes
- reinforcement learning
- average reward
- reward function
- discounted reward
- total reward
- optimal policy
- state space
- reinforcement learning algorithms
- expected reward
- policy iteration
- finite state
- state and action spaces
- dynamic programming
- action space
- model free
- rl algorithms
- semi markov decision processes
- markov decision process
- decision theoretic planning
- transition matrices
- average cost
- stochastic games
- state action
- action sets
- reachability analysis
- actor critic
- learning algorithm
- function approximation
- factored mdps
- partially observable
- model based reinforcement learning
- policy gradient
- stationary policies
- machine learning
- continuous state
- learning agent
- finite horizon
- partially observable markov decision processes
- optimal control
- planning under uncertainty
- temporal difference
- long run