Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss.
Lénaïc Chizat, Francis R. Bach. Published in: CoRR (2020)
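The setting named in the title can be sketched in a few lines: a wide two-layer ReLU network trained on the logistic loss by plain gradient descent. This is a minimal illustrative sketch, not the authors' code; the toy data, width `m`, and step size `lr` are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data with labels y in {-1, +1} (illustrative choice).
n, d, m = 40, 2, 200            # samples, input dimension, hidden width
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.1)

# Two-layer network f(x) = (1/m) * sum_j a_j * relu(w_j . x).
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

def forward(X):
    return (np.maximum(X @ W.T, 0.0) @ a) / m

def logistic_loss(f):
    # Mean logistic loss: log(1 + exp(-y * f)).
    return np.mean(np.log1p(np.exp(-y * f)))

loss0 = logistic_loss(forward(X))

lr = 1.0
for _ in range(2000):
    z = np.maximum(X @ W.T, 0.0)          # hidden activations, shape (n, m)
    f = z @ a / m
    g = -y / (1.0 + np.exp(y * f)) / n    # dLoss/df for each sample
    # Gradient descent on both layers (backprop through the ReLU).
    a -= lr * (z.T @ g) / m
    W -= lr * (((g[:, None] * (z > 0.0)) * a).T @ X) / m

loss_final = logistic_loss(forward(X))
```

On this separable toy problem the training loss decreases steadily; the paper's question is which classifier such dynamics converge to as training continues.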
Keyphrases
- neural network
- multilayer perceptron
- multi layer
- trained neural networks
- training process
- auto associative
- synaptic weights
- cost function
- single layer
- learning rules
- artificial neural networks
- backpropagation algorithm
- multilayer neural network
- neural network model
- back propagation
- pattern recognition
- genetic algorithm
- rule extraction
- fuzzy logic
- wide range
- training algorithm
- neural nets
- feed forward
- training set
- fault diagnosis
- feedforward neural networks
- radial basis function
- neural learning
- objective function
- elman network
- biologically plausible
- hidden units
- application layer
- conjugate gradient
- multiple layers
- error function
- hidden layer
- fuzzy systems
- associative memory
- svm classifier