On the Convergence Rate of Training Recurrent Neural Networks.
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. Published in: NeurIPS (2019)
Keyphrases
- convergence rate
- recurrent neural networks
- recurrent networks
- echo state networks
- feedforward neural networks
- step size
- learning rate
- neural network
- feed forward
- convergence speed
- complex-valued
- feed forward neural networks
- reservoir computing
- hidden layer
- mutation operator
- neural model
- global convergence
- cascade correlation
- gradient method
- wavelet neural network
- training algorithm
- neuro-fuzzy
- nonlinear dynamic systems
- numerical stability
- artificial neural networks
- variable step size
- Gauss-Seidel method
- multi-objective
- learning algorithm
- genetic algorithm