POMDP and Hierarchical Options MDP with Continuous Actions for Autonomous Driving at Intersections.
Zhiqian Qiao, Katharina Muelling, John M. Dolan, Praveen Palanisamy, Priyantha Mudalige. Published in: ITSC (2018)
Keyphrases
- autonomous driving
- action space
- partially observable
- partially observable markov decision process
- markov decision processes
- state and action spaces
- reward function
- state space
- markov decision process
- continuous state
- partially observable markov decision processes
- continuous action
- reinforcement learning
- decision theoretic
- finite state
- grand challenge
- markov decision problems
- optimal policy
- continuous state spaces
- partial observability
- dynamical systems
- stereo vision
- belief state
- initial state
- decision theoretic planning
- planning under uncertainty
- state action
- infinite horizon
- decision problems
- multiple agents
- policy iteration
- real time
- dec pomdps
- markov chain
- vision algorithms
- action selection
- vision system
- traffic light
- single agent
- machine learning
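The title names a POMDP combined with a hierarchical options MDP over continuous actions. Below is a minimal illustrative sketch of how such a hierarchy can be organized: a high-level policy selects a temporally extended option, each option emits continuous accelerations, and a belief over the partially observed speed is maintained from noisy measurements. All class names, parameters, and numbers (`Option`, `select_option`, `update_belief`, target speeds, noise levels) are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical, minimal sketch of the keyphrases above: a high-level policy
# over temporally extended "options", each emitting continuous accelerations,
# acting on a belief state maintained from noisy observations. All names,
# parameters, and numbers are illustrative assumptions, not the paper's design.

class Option:
    """An option = a continuous low-level control law + a termination test."""
    def __init__(self, name, target_speed):
        self.name = name
        self.target_speed = target_speed  # m/s (hypothetical parameter)

    def act(self, speed_estimate):
        # Continuous action: proportional acceleration toward the target speed,
        # clipped to a plausible comfort range.
        return float(np.clip(0.5 * (self.target_speed - speed_estimate), -3.0, 2.0))

    def terminated(self, speed_estimate):
        # Option ends once the estimated speed is close to its target.
        return abs(speed_estimate - self.target_speed) < 0.2


def select_option(speed_estimate, intersection_clear):
    """Hypothetical high-level policy over options (the hierarchical layer)."""
    return Option("go", 8.0) if intersection_clear else Option("stop", 0.0)


def update_belief(belief_speed, observation, gain=0.3):
    """Crude belief update: exponential smoothing of noisy speed observations,
    standing in for a full POMDP belief-state filter."""
    return belief_speed + gain * (observation - belief_speed)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_speed, belief_speed = 0.0, 0.0
    option = select_option(belief_speed, intersection_clear=True)
    dt = 0.1
    for step in range(100):
        accel = option.act(belief_speed)          # continuous action from the option
        true_speed += accel * dt                  # (toy) vehicle dynamics
        obs = true_speed + rng.normal(0.0, 0.3)   # partial observability: noisy speed
        belief_speed = update_belief(belief_speed, obs)
        if option.terminated(belief_speed):       # option terminates; re-select
            option = select_option(belief_speed, intersection_clear=True)
    print(f"option={option.name}, belief speed={belief_speed:.2f} m/s")
```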