Marginal Utility for Planning in Continuous or Large Discrete Action Spaces.
Zaheen Farraz Ahmad, Levi Lelis, Michael Bowling
Published in: NeurIPS (2020)
Keyphrases
- continuous action
- action space
- marginal utility
- continuous state spaces
- continuous state
- action selection
- state space
- partially observable Markov decision processes
- Markov decision processes
- planning problems
- single agent
- reinforcement learning
- real-valued
- policy search
- state and action spaces
- Markov decision problems
- continuous domains
- stochastic processes
- heuristic search
- control policies
- reinforcement learning problems
- finite state
- dynamic programming
- multi-agent systems
- belief state
- decision-theoretic
- initial state
- Markov decision process
- state variables
- motion planning
- dynamical systems
- probabilistic model