Policy Consolidation for Continual Reinforcement Learning.
Christos Kaplanis, Murray Shanahan, Claudia Clopath. Published in: ICML (2019)
Keyphrases
- reinforcement learning
- optimal policy
- policy search
- markov decision process
- action selection
- actor critic
- control policy
- partially observable environments
- policy gradient
- reward function
- markov decision processes
- partially observable domains
- policy iteration
- state space
- state and action spaces
- function approximators
- partially observable
- markov decision problems
- reinforcement learning problems
- action space
- decision problems
- reinforcement learning algorithms
- approximate dynamic programming
- continuous state
- function approximation
- model free
- infinite horizon
- multi agent
- average reward
- state action
- temporal difference
- asymptotically optimal
- partially observable markov decision processes
- finite state
- inverse reinforcement learning
- dynamic programming
- eligibility traces
- robotic control
- optimal control
- exploration exploitation tradeoff
- approximate policy iteration
- machine learning
- transition model
- policy evaluation
- control policies
- reinforcement learning methods
- average cost
- learning algorithm
- continuous state spaces
- gradient method
- control problems
- long run
- learning process
- policy gradient methods
- model free reinforcement learning