Fixed-point optimization of deep neural networks with adaptive step size retraining.
Sungho Shin, Yoonho Boo, Wonyong Sung. Published in: CoRR (2017)
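The title refers to fixed-point (uniform) quantization of network weights, where a step size determines the quantization grid that retraining adapts. As a minimal sketch only, not the paper's actual code, the function and parameter names below are illustrative:

```python
import numpy as np

def quantize(weights, step, num_bits=8):
    """Uniformly quantize weights to a signed fixed-point grid.

    `step` is the quantization step size (the quantity adapted during
    retraining in the paper's setting); values are clipped to the
    representable range of a signed `num_bits` fixed-point format.
    This is an illustrative sketch, not the authors' implementation.
    """
    levels = 2 ** (num_bits - 1) - 1       # symmetric signed range
    q = np.round(weights / step)           # snap to nearest grid point
    q = np.clip(q, -levels, levels)        # saturate to representable range
    return q * step

w = np.array([0.37, -0.92, 0.06, 1.4])
print(quantize(w, step=0.1, num_bits=4))
```

With 4 bits and step 0.1, the representable range is ±0.7, so the last value saturates rather than overflowing.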
Keyphrases
- fixed point
- step size
- neural network
- variable step size
- faster convergence
- steepest descent method
- stochastic gradient descent
- line search
- convergence rate
- cost function
- convergence speed
- sufficient conditions
- dynamical systems
- optimization algorithm
- temporal difference
- floating point
- optimization problems
- belief propagation
- artificial neural networks
- global convergence
- fixed point theorem
- genetic algorithm
- wavelet coefficients
- variational inequalities
- gradient method
- multi objective
- multiscale
- global optimization
- policy iteration
- differential evolution
- least squares
- multiresolution
- feature vectors
- search space