Tutorial on Sampling-based POMDP-planning for Automated Driving.
Henrik Bey, Maximilian Tratz, Moritz Sackmann, Alexander Lange, Jörn Thielecke
Published in: VEHITS (2020)
Keyphrases
- motion planning
- belief space
- planning problems
- partially observable Markov decision processes
- partially observable
- belief state
- partially observable Markov decision process
- reinforcement learning
- partial observability
- finite state
- semi-automated
- optimal policy
- heuristic search
- continuous state
- Markov decision process
- composition of web services
- dynamic programming
- Markov decision processes
- dynamical systems
- state space
- AI planning
- plan generation
- planning under uncertainty
- domain independent
- mobile robot
- stochastic domains
- planning process
- optimal planning
- Markov decision problems
- decision makers
- predictive state representations
- partially observable stochastic domains