Self-Regularity of Output Weights for Overparameterized Two-Layer Neural Networks.
David Gamarnik, Eren C. Kizildag, Ilias Zadik
Published in: ISIT (2021)
Keyphrases
- neural network
- output layer
- hidden nodes
- hidden layer
- activation function
- feed-forward
- hidden units
- multi-layer
- backpropagation
- feedforward neural networks
- RBF network
- multilayer perceptron
- desired output
- artificial neural networks
- synaptic weights
- fuzzy logic
- number of hidden units
- hidden neurons
- connection weights
- linear combination
- single layer
- weighted sum
- extreme learning machine
- pattern recognition
- application layer
- genetic algorithm
- RBF neural network
- radial basis function
- training algorithm
- neural nets
- recurrent neural networks
- neural network model
- fault diagnosis
- artificial intelligence
- weighting scheme
- learning rules
- training process
- associative memory
- weight update
- training data