Loss Landscapes Are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent
Ping-yeh Chiang, Renkun Ni, David Yu Miller, Arpit Bansal, Jonas Geiping, Micah Goldblum, Tom Goldstein
Published in: ICLR (2023)
Keyphrases
- neural network
- cost function
- learning rules
- neural network model
- pattern recognition
- recurrent neural networks
- artificial neural networks
- fuzzy logic
- image reconstruction from projections
- fitness landscape
- learning vector quantization
- feed-forward neural networks
- fuzzy neural network
- self-organizing maps
- loss function
- multi-layer perceptron
- neural network is trained
- objective function
- inductive bias
- backpropagation
- radial basis function
- associative memory
- fault diagnosis
- feed-forward
- training process
- multi-layer
- trade-off
- backpropagation neural network
- expert systems
- variance reduction
- prediction model
- genetic algorithm