Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning.
Michael Zhang
Published in: CoRR (2020)
Keyphrases
- reinforcement learning
- optimal policy
- policy search
- markov decision process
- action selection
- function approximators
- function approximation
- reward function
- partially observable environments
- control policy
- partially observable
- actor critic
- markov decision processes
- action space
- state and action spaces
- policy iteration
- state space
- policy gradient
- adaptive learning
- control policies
- policy evaluation
- reinforcement learning problems
- partially observable markov decision processes
- average reward
- markov decision problems
- reinforcement learning algorithms
- decision problems
- dynamic programming
- continuous state
- continuous state spaces
- partially observable domains
- rl algorithms
- state action
- inverse reinforcement learning
- machine learning
- e-learning
- model free
- infinite horizon
- supervised learning
- optimal control
- finite state
- personalized recommendation
- model free reinforcement learning
- policy gradient methods
- meta level
- temporal difference learning
- long run
- approximate dynamic programming
- user model
- learning problems
- transition model
- user profiles
- learning environment
- agent learns
- multi agent
- neural network
- relative entropy
- individual user