Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks.
David Gamarnik, Eren C. Kizildag, Ilias Zadik
Published in: IEEE Trans. Signal Process. (2022)
Keyphrases
- neural network
- output layer
- hidden layer
- hidden nodes
- activation function
- multi layer
- feed forward
- hidden units
- back propagation
- feedforward neural networks
- artificial neural networks
- multi layer perceptron
- number of hidden units
- synaptic weights
- multilayer perceptron
- positive and negative
- rbf network
- extreme learning machine
- desired output
- connection weights
- pattern recognition
- neural network model
- hidden neurons
- input output
- data sets
- weight update
- genetic algorithm
- single layer
- linear combination
- self organizing maps
- training algorithm
- learning rate
- recurrent neural networks
- fuzzy logic
- radial basis function
- neural nets
- input variables
- rbf neural network