POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning.
Frederik Schubert, Carolin Benjamins, Sebastian Döhler, Bodo Rosenhahn, Marius Lindauer. Published in: CoRR (2022)
Keyphrases
- reinforcement learning
- optimal policy
- policy search
- markov decision process
- action selection
- reinforcement learning problems
- state space
- reinforcement learning algorithms
- policy gradient
- unsupervised learning
- supervised learning
- learning algorithm
- actor critic
- function approximators
- partially observable
- partially observable environments
- average reward
- policy iteration
- markov decision processes
- state and action spaces
- markov decision problems
- control policy
- reward function
- action space
- semi supervised
- partially observable markov decision processes
- function approximation
- control policies
- state action
- continuous state spaces
- approximate dynamic programming
- ensemble methods
- model free
- rl algorithms
- partially observable domains
- training data
- machine learning
- continuous state
- policy evaluation
- exploration exploitation tradeoff
- neural network
- feature selection
- temporal difference
- infinite horizon
- long run
- ensemble learning
- trajectory data
- dynamic programming
- policy gradient methods
- multi agent
- base classifiers
- agent learns
- learning process
- image restoration
- feature space
- training set
- inverse reinforcement learning
- least squares
- learning problems
- optimal control
- random forests
- regularization parameter
- ensemble classifier