Slow manifolds in recurrent networks encode working memory efficiently and robustly.
Elham Ghazizadeh, ShiNung Ching. Published in: CoRR (2021)
Keyphrases
- working memory
- recurrent networks
- cognitive load
- recurrent neural networks
- computational model
- cognitive architecture
- information processing
- focus of attention
- long term memory
- short term memory
- feed forward
- working memory capacity
- individual differences
- neural network
- computational modeling
- artificial intelligence
- real time