SS-SAM: Stochastic Scheduled Sharpness-Aware Minimization for Efficiently Training Deep Neural Networks.
Yang Zhao, Hao Zhang, Xiuyuan Hu
Published in: CoRR (2022)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- pattern recognition
- deep architectures
- multi layer perceptron
- backpropagation algorithm
- back propagation
- neural network training
- scheduling problem
- feed forward neural networks
- artificial neural networks
- training phase
- training set
- data sets
- self organizing maps
- neural network structure
- hidden layer
- stochastic model
- error back propagation
- multi layer
- objective function
- multilayer perceptron
- bayesian networks
- learning algorithm
- training samples
- training examples
- fault diagnosis
- recurrent networks
- training patterns
- hopfield neural network
- deep learning
- learning automata
- monte carlo
- feed forward
- activation function
- information content
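The title names Sharpness-Aware Minimization (SAM), which SS-SAM builds on. As background, here is a minimal sketch of the standard SAM update rule only; the paper's stochastic scheduling of SAM versus plain SGD steps is not described on this page and is not shown. The function names and the toy quadratic loss are illustrative assumptions, not this paper's code.

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """One standard SAM update on parameters `w`.

    grad_fn(w) must return the gradient of the loss at `w`.
    `rho` is the radius of the L2 perturbation ball; `lr` is the base step size.
    (Illustrative sketch; not the paper's SS-SAM schedule.)
    """
    g = grad_fn(w)
    # Ascent step: move to the approximate worst-case point inside the rho-ball.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent step: apply the gradient taken at the perturbed point.
    g_sharp = grad_fn(w + eps)
    return w - lr * g_sharp

# Toy usage on f(w) = ||w||^2 / 2, whose gradient is simply w.
w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w, grad_fn=lambda w: w)
```

The key design point is that the gradient used for the parameter update is evaluated at the perturbed weights `w + eps`, steering the optimizer toward flat minima rather than merely low-loss ones.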