DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks.
Samyak Jain, Sravanti Addepalli, Pawan Sahu, Priyam Dey, R. Venkatesh Babu. Published in: CoRR (2023)
Keyphrases
- neural network
- training process
- training algorithm
- neural network training
- backpropagation algorithm
- feedforward neural networks
- feed forward neural networks
- training phase
- multi layer perceptron
- artificial neural networks
- training patterns
- learning machines
- back propagation
- pattern recognition
- test set
- recurrent neural networks
- multilayer perceptron
- training set
- artificial intelligence
- feed forward
- recurrent networks
- genetic algorithm
- data sets
- hidden layer
- neural network model
- online learning
- supervised learning
- multi class
- computer vision
- neural network structure
- error back propagation