On the Convergence Rate of Training Recurrent Neural Networks.
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. Published in: CoRR (2018)
Keyphrases
- convergence rate
- recurrent neural networks
- echo state networks
- recurrent networks
- feedforward neural networks
- convergence speed
- neural network
- learning rate
- feed forward
- step size
- global convergence
- hidden layer
- gradient method
- complex valued
- reservoir computing
- wavelet neural network
- neural model
- feed forward neural networks
- artificial neural networks
- training algorithm
- numerical stability
- neuro fuzzy
- mutation operator
- multi objective
- cascade correlation
- nonlinear dynamic systems
- stochastic gradient descent
- variable step size