Policy invariance under reward transformations for multi-objective reinforcement learning.
Patrick Mannion, Sam Devlin, Karl Mason, Jim Duggan, Enda Howley
Published in: Neurocomputing (2017)
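For context, the title echoes Ng, Harada, and Russell's "Policy invariance under reward transformations" (ICML 1999), whose single-objective result this paper presumably extends to the multi-objective setting. A minimal sketch of that classic result is given below; the symbols Φ, F, and R' follow the standard shaping notation and are not taken from this paper:

```latex
% Potential-based reward shaping (Ng et al., 1999), single-objective case:
% for any potential function $\Phi : S \to \mathbb{R}$, augmenting the reward
% with the shaping term $F$ leaves the set of optimal policies unchanged.
F(s, a, s') = \gamma \, \Phi(s') - \Phi(s), \qquad
R'(s, a, s') = R(s, a, s') + F(s, a, s')
```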
Keyphrases
- multi-objective
- reinforcement learning
- optimal policy
- reward function
- partially observable environments
- policy gradient
- total reward
- policy search
- evolutionary algorithm
- action selection
- average reward
- reinforcement learning algorithms
- state-action
- image transformations
- Markov decision process
- eligibility traces
- multi-objective optimization
- Markov decision problems
- state space
- inverse reinforcement learning
- control policies
- control policy
- actor-critic
- Markov decision processes
- function approximators
- optimization algorithm
- function approximation
- agent learns
- partially observable
- policy iteration
- state and action spaces
- genetic algorithm
- policy evaluation
- partially observable Markov decision processes
- RL algorithms
- multiple objectives
- particle swarm optimization
- reinforcement learning problems
- approximate dynamic programming
- multi-objective optimization problems
- action space
- model-free
- discounted reward
- decision problems
- agent receives
- expected reward
- temporal difference
- Pareto optimal
- long run
- NSGA-II
- dynamic programming
- objective function
- partially observable domains
- policy gradient methods
- continuous state spaces
- continuous state
- learning agent
- finite horizon
- conflicting objectives
- reinforcement learning methods
- infinite horizon
- finite state
- trade-off
- multi-objective evolutionary
- reward signal
- machine learning
- transition model
- optimal control
- decision makers
- multi-agent