How to Explore with Belief: State Entropy Maximization in POMDPs.
Riccardo Zamboni, Duilio Cirino, Marcello Restelli, Mirco Mutti
Published in: CoRR (2024)
Keyphrases
- belief state
- state space
- partially observable Markov decision processes
- belief space
- belief revision
- partial observability
- partially observable
- point-based value iteration
- approximation methods
- partial knowledge
- dynamic Bayesian networks
- objective function
- partially observable Markov decision process
- regression model
- dynamic environments
- reactive planning
- image segmentation