One Policy is Enough: Parallel Exploration with a Single Policy is Minimax Optimal for Reward-Free Reinforcement Learning.
Pedro Cisneros-Velarde, Boxiang Lyu, Sanmi Koyejo, Mladen Kolar. Published in: CoRR (2022)
Keyphrases
- reinforcement learning
- optimal policy
- control policy
- action selection
- total reward
- average reward
- policy search
- reward function
- control policies
- partially observable environments
- markov decision processes
- markov decision process
- policy gradient
- finite horizon
- state dependent
- optimal control
- state space
- asymptotically optimal
- discounted reward
- expected reward
- state action
- partially observable
- dynamic programming
- policy iteration
- markov decision problems
- reinforcement learning algorithms
- agent learns
- state and action spaces
- exploration exploitation tradeoff
- long run
- function approximation
- learning algorithm
- model based reinforcement learning
- active exploration
- eligibility traces
- function approximators
- reinforcement learning problems
- inverse reinforcement learning
- infinite horizon
- approximate dynamic programming
- model free
- temporal difference
- rl algorithms
- decision problems
- allocation policy
- mobile robot
- lower bound
- expected cost
- optimal solution
- partially observable domains
- action space