Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks.
Shiyu Liu, Rohan Ghosh, Chong Min John Tan, Mehul Motani
Published in: Trans. Mach. Learn. Res. (2023)
Keyphrases
- learning rate
- neural network
- activation function
- hidden layer
- backpropagation algorithm
- training algorithm
- convergence rate
- error function
- adaptive learning rate
- learning algorithm
- weight vector
- search space
- scheduling problem
- feed forward neural networks
- convergence speed
- multilayer neural networks
- neural network model
- fuzzy logic
- genetic algorithm
- rapid convergence
- feedforward neural networks
- fuzzy neural network
- training speed
- artificial neural networks
- back propagation
- recurrent neural networks
- neural nets
- machine learning
- radial basis function
- global optimization
- multi layer perceptron
- feed forward
- training process
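
The paper's specific schedules are not reproduced in this record. As a hedged illustration of the general setting the title and keyphrases describe (a learning rate schedule combined with iterative pruning and retraining), the sketch below restarts a cosine learning rate schedule after each magnitude-pruning round. The toy network, the hyperparameters (`PRUNE_ROUNDS`, `EPOCHS_PER_ROUND`, `PRUNE_AMOUNT`), and the choice of cosine annealing are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (NOT the paper's method): iterative magnitude pruning
# in which the learning rate schedule is re-initialized after each round,
# so retraining of the pruned network starts from a fresh, higher rate.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Toy feedforward network and synthetic regression data (assumptions).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
x, y = torch.randn(256, 20), torch.randn(256, 1)
loss_fn = nn.MSELoss()

PRUNE_ROUNDS = 3       # hypothetical hyperparameters
EPOCHS_PER_ROUND = 10
PRUNE_AMOUNT = 0.2     # remove 20% of remaining weights each round

for round_idx in range(PRUNE_ROUNDS):
    # Rebuild the optimizer and schedule every round: this is the
    # "restart the schedule after pruning" idea in its simplest form.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=EPOCHS_PER_ROUND)

    for epoch in range(EPOCHS_PER_ROUND):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # anneal the learning rate within the round

    # Magnitude-based unstructured pruning on every linear layer;
    # torch's prune API masks weights while keeping them trainable.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=PRUNE_AMOUNT)

    print(f"round {round_idx}: loss={loss.item():.4f}")
```

In this sketch the design choice under study is visible in one place: moving the `optimizer`/`scheduler` construction outside the round loop would instead anneal the rate monotonically across all pruning rounds, which is the kind of schedule variation the paper's title refers to.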