LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning.
Mohammadhosein Hasanbeig, Daniel Kroening, Alessandro Abate
Published in: QEST (2022)
Keyphrases
- reinforcement learning
- optimal policy
- policy search
- Markov decision process
- action selection
- function approximators
- function approximation
- control policy
- partially observable
- Markov decision processes
- reinforcement learning problems
- reward function
- policy evaluation
- action space
- state space
- state and action spaces
- actor critic
- control policies
- state action
- Markov decision problems
- policy gradient
- model free reinforcement learning
- partially observable environments
- reinforcement learning algorithms
- model free
- policy iteration
- program synthesis
- texture synthesis
- continuous state
- partially observable domains
- temporal difference
- decision problems
- finite state
- learning algorithm
- multi agent
- continuous state spaces
- approximate dynamic programming
- reinforcement learning methods
- policy gradient methods
- partially observable Markov decision processes
- learning problems
- long run
- transition model
- average reward
- transfer learning
- agent learns
- allocation policy
- asymptotically optimal
- natural actor critic
- agent receives
- neural network
- robotic control
- optimal control
- learning classifier systems
- temporal difference learning
- state dependent