Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs
Dongruo Zhou, Quanquan Gu. Published in: CoRR (2022)
Keyphrases
- reinforcement learning
- computationally efficient
- Markov decision processes
- state space
- optimal policy
- reinforcement learning algorithms
- learning algorithm
- function approximation
- state and action spaces
- function approximators
- Markov decision process
- partially observable
- policy search
- temporal difference
- model free
- continuous state and action spaces
- dynamic programming
- machine learning
- action selection
- action space
- approximate dynamic programming
- model based reinforcement learning
- factored MDPs
- reward function
- mixture model
- control problems
- reinforcement learning methods
- planning under uncertainty
- finite state
- continuous state spaces
- finite horizon
- reinforcement learning problems
- policy iteration
- computational complexity
- real valued
- Gaussian densities