Optimization of GPU Memory Usage for Training Deep Neural Networks.
Che-Lun Hung, Chine-fu Hsin, Hsiao-Hsi Wang, Chuan Yi Tang
Published in: I-SPAN (2019)
Keyphrases
- memory usage
- neural network
- training process
- training algorithm
- feedforward neural networks
- pattern recognition
- feed forward neural networks
- memory footprint
- back propagation
- neural network training
- memory requirements
- multi layer perceptron
- optimization problems
- genetic algorithm
- training samples
- training patterns
- training examples
- neural network structure
- global optimization
- training set
- error back propagation
- real time
- multi layer
- GPU implementation
- constrained optimization
- hidden layer
- optimization process
- backpropagation algorithm
- optimization method
- optimization algorithm
- general purpose
- fuzzy logic
- highly nonlinear
- deep architectures
- multilayer neural network
- recurrent networks
- multilayer perceptron
- parallel implementation
- recurrent neural networks
- parallel processing
- feed forward
- test set
- semi supervised
- evolutionary algorithm
- artificial neural networks
- training data
- machine learning
- data sets