On Integrating POMDP and Scenario MPC for Planning under Uncertainty - with Applications to Highway Driving.
Carl Hynén Ulfsjöö, Daniel Axehill
Published in: IV (2022)
Keyphrases
- planning under uncertainty
- partially observable Markov decision processes
- belief space
- decision-theoretic
- Markov decision processes
- AI planning
- dynamical systems
- partially observable Markov decision process
- Dec-POMDPs
- robotic tasks
- finite state
- multi-agent
- optimal policy
- traffic accidents
- probabilistic planning
- decision-theoretic planning
- reinforcement learning
- belief state
- learning algorithm
- decision problems
- dynamic programming
- decision making