Warming-up recurrent neural networks to maximize reachable multi-stability greatly improves learning.
Nicolas Vecoven, Damien Ernst, Guillaume Drion
Published in: CoRR (2021)
Keyphrases
- recurrent neural networks
- recurrent networks
- learning algorithm
- knowledge acquisition
- long short term memory
- reinforcement learning
- learning process
- supervised learning
- learning systems
- cascade correlation
- machine learning
- learning tasks
- feed forward
- incremental learning
- feedforward neural networks
- neural network
- hidden layer
- prior knowledge
- artificial neural networks