Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction.
Riccardo De Santi, Federico Arangath Joseph, Noah Liniger, Mirco Mutti, Andreas Krause
Published in: CoRR (2024)
Keyphrases
- Markov decision processes
- active exploration
- decision theoretic planning
- reinforcement learning
- state abstraction
- state space
- finite state
- active learning
- optimal policy
- transition matrices
- problem based learning
- dynamic programming
- planning under uncertainty
- small sample
- policy iteration
- average cost
- infinite horizon
- reachability analysis
- partially observable Markov decision process
- average reward
- action sets
- reward function
- Markov decision problems
- action space
- game playing
- data mining
- programming language
- probabilistic planning
- long run