AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks.
Hao Sun
Li Shen
Qihuang Zhong
Liang Ding
Shixiang Chen
Jingwei Sun
Jing Li
Guangzhong Sun
Dacheng Tao
Published in: CoRR (2023)
Keyphrases
adaptive learning rate
learning rate
neural network
training algorithm
learning algorithm
hidden layer
convergence rate
training process
back propagation
artificial neural networks
convergence speed
training set
evolutionary algorithm
machine learning
feature selection
objective function
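
The title names the technique: sharpness-aware minimization (SAM) driven by an adaptive learning rate and momentum. As a rough illustration of that combination, below is a minimal PyTorch-style sketch that pairs SAM's two gradient passes (ascent to a perturbed point, descent using the gradient there) with Adam-style first and second moments. The function name, hyperparameter defaults, and the `closure`/`state` interface are hypothetical conveniences for this sketch, not the authors' AdaSAM implementation.

```python
import torch

@torch.no_grad()
def adasam_like_step(parameters, closure, state, rho=0.05, lr=1e-3,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    # Hedged sketch: SAM's ascent/descent passes combined with an
    # Adam-style adaptive learning rate and momentum. An illustration
    # of the general idea, not the paper's exact AdaSAM update.
    params = [p for p in parameters if p.requires_grad]

    # Ascent: gradient at the current weights.
    with torch.enable_grad():
        closure().backward()

    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    scale = rho / (grad_norm + 1e-12)

    # Perturb each weight toward higher loss; remember the offsets.
    offsets = []
    for p in params:
        e_w = p.grad * scale
        p.add_(e_w)
        offsets.append(e_w)
        p.grad = None

    # Descent: gradient at the perturbed weights (the SAM gradient).
    with torch.enable_grad():
        loss = closure()
        loss.backward()

    state['t'] = state.get('t', 0) + 1
    t = state['t']
    for i, p in enumerate(params):
        p.sub_(offsets[i])                             # undo perturbation
        g = p.grad
        m = state.setdefault(('m', i), torch.zeros_like(p))
        v = state.setdefault(('v', i), torch.zeros_like(p))
        m.mul_(beta1).add_(g, alpha=1 - beta1)         # momentum
        v.mul_(beta2).addcmul_(g, g, value=1 - beta2)  # adaptive scaling
        m_hat = m / (1 - beta1 ** t)                   # bias correction
        v_hat = v / (1 - beta2 ** t)
        p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
        p.grad = None
    return loss.item()


# Toy usage: fit a small linear model with the sketch above.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(64, 10), torch.randn(64, 1)
state = {}
for _ in range(10):
    adasam_like_step(model.parameters(),
                     lambda: torch.nn.functional.mse_loss(model(x), y),
                     state)
```

Each step costs two forward/backward passes, as in standard SAM; the Adam-style moments are computed only from the second (perturbed-point) gradient, which is one plausible way to read "adaptive learning rate and momentum" in the title.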