Exploiting Invariance in Training Deep Neural Networks
Chengxi Ye, Xiong Zhou, Tristan McKinney, Yanfeng Liu, Qinggang Zhou, Fedor Zhdanov
Published in: CoRR (2021)
Keyphrases
- neural network
- training algorithm
- training process
- multi-layer perceptron
- feed-forward neural networks
- backpropagation algorithm
- feedforward neural networks
- backpropagation
- deep architectures
- training patterns
- radial basis function network
- pattern recognition
- genetic algorithm
- neural nets
- multi-layer
- recurrent networks
- invariant features
- training phase
- artificial neural networks
- training set
- neural network training
- neural network structure
- error backpropagation
- discriminative power
- network architecture
- hidden layer
- recurrent neural networks
- test set
- supervised learning
- computer vision