Benefits of combining dimensional attention and working memory for partially observable reinforcement learning problems.
Ngozi Omatu, Joshua L. Phillips
Published in: ACM Southeast Conference (2021)
Keyphrases
- reinforcement learning problems
- partially observable
- working memory
- Markov decision problems
- focus of attention
- reinforcement learning
- reinforcement learning algorithms
- Markov decision processes
- state space
- cognitive load
- computational model
- dynamical systems
- decision problems
- policy iteration
- information processing
- cognitive architecture
- visual attention
- infinite horizon
- optimal policy
- linear programming
- reward function
- function approximation
- decision theoretic
- belief state
- function approximators
- utility function
- Markov decision process
- dynamic programming
- decision processes
- multi-agent
- transition probabilities
- model free
- temporal difference
- knowledge base
- expected utility
- eye movements
- supervised learning
- supply chain
- Bayesian networks