Reinforcement Learning with State Observation Costs in Action-Contingent Noiselessly Observable Markov Decision Processes
Hyunji Alex Nam, Scott L. Fleming, Emma Brunskill
Published in: NeurIPS (2021)
Keyphrases
- markov decision processes
- action space
- reinforcement learning
- state space
- discounted reward
- optimal policy
- average cost
- state action
- state and action spaces
- markov decision process
- state abstraction
- finite state
- reinforcement learning algorithms
- action sets
- decision theoretic planning
- dynamic programming
- finite horizon
- average reward
- total reward
- reachability analysis
- policy iteration
- continuous state
- reward function
- partially observable
- infinite horizon
- real time dynamic programming
- transition model
- transition matrices
- initial state
- action selection
- continuous state spaces
- state variables
- function approximation
- markov chain
- model based reinforcement learning
- learning algorithm
- real valued
- factored mdps
- expected reward
- machine learning