Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks.
David Gamarnik, Eren C. Kizildag, Ilias Zadik. Published in: CoRR (2021)
Keyphrases
- neural network
- output layer
- hidden layer
- hidden nodes
- hidden units
- activation function
- multi layer
- feed forward
- back propagation
- rbf network
- feedforward neural networks
- artificial neural networks
- multilayer perceptron
- radial basis function
- synaptic weights
- extreme learning machine
- pattern recognition
- positive and negative
- hidden neurons
- number of hidden units
- recurrent neural networks
- desired output
- connection weights
- rbf neural network
- weighted sum
- fuzzy logic
- linear combination
- fault diagnosis
- genetic algorithm
- expert systems
- neural nets
- input output
- neural network model
- network architecture
- training algorithm
- weight update
- nonlinear functions
- rule extraction
- weighting scheme
- application layer
- input data
- single layer
- image sequences
- fuzzy systems
- bp neural network
- machine learning
- data sets