Gradient Descent Maximizes the Margin of Homogeneous Neural Networks.
Kaifeng Lyu, Jian Li. Published in: ICLR (2020)
Keyphrases
- neural network
- objective function
- learning rules
- cost function
- pattern recognition
- artificial neural networks
- support vector
- back propagation
- loss function
- recurrent neural networks
- feedforward neural networks
- margin maximization
- maximum margin
- activation function
- neural network model
- artificial intelligence
- hidden layer
- training algorithm
- kernel function
- training set
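The paper's central claim — that for a homogeneous network trained by gradient descent on logistic loss, the normalized margin keeps improving late in training — can be checked numerically on a toy model. The sketch below is illustrative only (the model, data, and hyperparameters are made up, not taken from the paper): it uses a 2-homogeneous "deep linear" model f(x) = a·b·x and tracks min_i y_i f(x_i) / ||θ||^L over gradient-descent steps.

```python
import math

# Toy check: train a 2-homogeneous model f(x) = a*b*x with gradient
# descent on logistic loss and watch the normalized margin
#   min_i y_i f(x_i) / ||theta||^L   (here L = 2, theta = (a, b)).
# All data and hyperparameters below are illustrative choices.

data = [(1.0, 1), (2.0, 1), (-1.5, -1)]  # (x, y), linearly separable
a, b = 0.3, 0.5   # small init so training starts in the separable regime
lr = 0.1          # learning rate
L = 2             # degree of homogeneity: f(c*theta; x) = c**L * f(theta; x)

def normalized_margin(a, b):
    norm_sq = a * a + b * b               # ||theta||^2 = ||theta||^L for L = 2
    return min(y * (a * b * x) for x, y in data) / norm_sq

margins = []
for step in range(2000):
    ga = gb = 0.0
    for x, y in data:
        f = a * b * x
        # derivative of log(1 + exp(-y*f)) w.r.t. f is -y * sigmoid(-y*f)
        s = -y / (1.0 + math.exp(y * f))
        ga += s * b * x                   # chain rule: df/da = b*x
        gb += s * a * x                   # chain rule: df/db = a*x
    a -= lr * ga
    b -= lr * gb
    margins.append(normalized_margin(a, b))

# Once the loss is small, the normalized margin should still be climbing.
print(margins[100] < margins[-1])
```

For this model the normalized margin is a·b/(a²+b²) ≤ 1/2, and gradient descent drives it toward that cap even though the unnormalized parameters grow without bound — the qualitative behavior the title describes.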