Convergence of gradient method for a fully recurrent neural network.
Dongpo Xu, Zhengxue Li, Wei Wu
Published in: Soft Comput. (2010)
Keyphrases
- recurrent neural networks
- gradient method
- convergence rate
- step size
- neural network
- log likelihood function
- convergence speed
- reservoir computing
- recurrent networks
- feed forward
- learning rate
- convex formulation
- hidden layer
- non-negative matrix factorization
- echo state networks
- artificial neural networks
- complex valued
- optimization methods
- control system
- multiscale
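The paper studies convergence of gradient training for a fully recurrent network with a fixed step size. As a rough illustration of that setting (not the paper's exact algorithm or experiments), the following is a minimal NumPy sketch of full-batch gradient descent with backpropagation through time on a fully connected recurrent layer; the toy sine-prediction task, network size, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): predict the next value of a sine sequence.
T = 20
xs = np.sin(np.linspace(0, 3 * np.pi, T + 1))
inputs, targets = xs[:-1], xs[1:]

n = 8                                # number of recurrent units
W = rng.normal(0, 0.1, (n, n))       # recurrent weights (every unit feeds every unit)
U = rng.normal(0, 0.1, n)            # input weights
V = rng.normal(0, 0.1, n)            # linear readout weights
eta = 0.05                           # fixed step size (learning rate)

def forward(W, U, V):
    """Run the recurrence h_{t+1} = tanh(W h_t + U x_t), y_t = V . h_{t+1}."""
    h = np.zeros(n)
    hs, ys = [h], []
    for x in inputs:
        h = np.tanh(W @ h + U * x)
        hs.append(h)
        ys.append(V @ h)
    return hs, np.array(ys)

for epoch in range(500):
    hs, ys = forward(W, U, V)
    errs = ys - targets              # gradient of 0.5 * sum of squared errors
    dW = np.zeros_like(W); dU = np.zeros_like(U); dV = np.zeros_like(V)
    dh = np.zeros(n)                 # gradient flowing back from future steps
    for t in reversed(range(T)):     # backpropagation through time
        dV += errs[t] * hs[t + 1]
        dh = dh + errs[t] * V
        dpre = dh * (1 - hs[t + 1] ** 2)   # through the tanh nonlinearity
        dW += np.outer(dpre, hs[t])
        dU += dpre * inputs[t]
        dh = W.T @ dpre              # pass gradient to the previous hidden state
    # Full-batch gradient step with a constant step size eta.
    W -= eta * dW / T; U -= eta * dU / T; V -= eta * dV / T

_, ys = forward(W, U, V)
mse = np.mean((ys - targets) ** 2)
print(f"final MSE: {mse:.4f}")
```

Convergence results of the kind the paper proves typically hinge on the step size `eta` being small enough relative to bounds on the weights and activations; with a too-large step the error sequence need not be monotone.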