POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning.
Frederik Schubert, Carolin Benjamins, Sebastian Döhler, Bodo Rosenhahn, Marius Lindauer
Published in: Trans. Mach. Learn. Res. (2023)
Keyphrases
- reinforcement learning
- optimal policy
- policy search
- action selection
- supervised learning
- markov decision process
- markov decision processes
- reinforcement learning problems
- unsupervised learning
- policy gradient
- function approximators
- partially observable
- action space
- state space
- reinforcement learning algorithms
- control policies
- learning algorithm
- policy evaluation
- reward function
- ensemble learning
- temporal difference
- markov decision problems
- approximate dynamic programming
- actor critic
- partially observable environments
- rl algorithms
- policy iteration
- state and action spaces
- machine learning
- model free
- ensemble methods
- function approximation
- partially observable domains
- state action
- continuous state
- neural network
- random forests
- partially observable markov decision processes
- dynamic programming
- learning problems
- semi supervised
- training set
- transition model
- continuous state spaces
- learning process
- multi agent
- infinite horizon
- transfer learning
- average reward
- control policy
- unsupervised manner
- regularization parameter
- trajectory data
- long run
- image restoration
- optimal control
- feature selection
- classifier ensemble
- policy gradient methods