Gradient Descent Maximizes the Margin of Homogeneous Neural Networks.
Kaifeng Lyu, Jian Li. Published in: CoRR (2019)
Keyphrases
- neural network
- objective function
- cost function
- back propagation
- neural network model
- artificial neural networks
- feed forward
- multilayer perceptron
- multi layer
- support vector
- maximum margin
- error function
- margin maximization
- decision boundary
- network architecture
- hidden layer
- neural nets