On the Convergence of AdaGrad with Momentum for Training Deep Neural Networks
Fangyu Zou, Li Shen. Published in: CoRR (2018)
Keyphrases
- neural network
- training process
- training algorithm
- learning rate
- feed forward neural networks
- backpropagation algorithm
- feedforward neural networks
- convergence rate
- weight update
- multi layer perceptron
- genetic algorithm
- neural network training
- training patterns
- back propagation
- pattern recognition
- training set
- training phase
- hidden layer
- convergence speed
- fuzzy logic
- multi layer
- error back propagation
- online training
- fuzzy systems
- online learning
- activation function
- deep architectures
- multilayer neural network
- training data
- training examples
- neural network model
- training samples
- feed forward
- artificial neural networks
- multi objective
- support vector machine
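The algorithm named in the title, AdaGrad with momentum, can be sketched in a few lines. The update form below (heavy-ball momentum applied to the AdaGrad-scaled gradient), the hyperparameter names `lr`, `beta`, `eps`, and the toy quadratic objective are illustrative assumptions for this sketch, not details taken from the paper itself.

```python
import numpy as np

def adagrad_momentum_step(w, g, accum, mom, lr=0.1, beta=0.9, eps=1e-8):
    """One step of AdaGrad with heavy-ball momentum (illustrative sketch).

    w     -- parameter vector
    g     -- gradient at w
    accum -- running sum of squared gradients (AdaGrad accumulator)
    mom   -- momentum buffer
    """
    accum = accum + g * g                  # accumulate squared gradients
    adapted = g / (np.sqrt(accum) + eps)   # per-coordinate adaptive scaling
    mom = beta * mom + adapted             # heavy-ball momentum on scaled gradient
    w = w - lr * mom                       # parameter update
    return w, accum, mom

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
accum = np.zeros_like(w)
mom = np.zeros_like(w)
for _ in range(200):
    g = w
    w, accum, mom = adagrad_momentum_step(w, g, accum, mom)
```

Because the accumulator only grows, the effective per-coordinate step size shrinks over time, which is the mechanism the convergence analysis of AdaGrad-style methods revolves around.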