Acceleration of Large Deep Learning Training with Hybrid GPU Memory Management of Swapping and Re-computing.
Haruki Imai, Tung D. Le, Yasushi Negishi, Kiyokuni Kawachiya
Published in: IEEE BigData (2020)
Keyphrases
- deep learning
- memory management
- deep architectures
- restricted Boltzmann machine
- parallel computation
- unsupervised learning
- operating system
- unsupervised feature learning
- machine learning
- hardware implementation
- supervised learning
- training set
- mental models
- real time
- parallel implementation
- weakly supervised
- pattern recognition
- higher order
- general purpose
- training examples
- parallel algorithm
- computing environments
- parallel computing
- information extraction
- reinforcement learning
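The title refers to a hybrid GPU memory-management strategy that combines swapping tensors out of device memory with re-computing intermediate activations during the backward pass. The sketch below is not the paper's implementation; it is a minimal, self-contained NumPy illustration of the re-computing half of that idea (often called gradient/activation checkpointing): only every `seg`-th activation is kept, and the discarded ones are recomputed from the nearest stored checkpoint when gradients are needed. All function names here (`grad_full`, `grad_checkpointed`, etc.) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def forward_layer(W, x):
    # one layer: ReLU(W @ x)
    return relu(W @ x)

def backward_layer(W, x, grad_out):
    # x is the layer's input; recompute the pre-activation to get the ReLU mask
    z = W @ x
    grad_z = grad_out * (z > 0)
    return W.T @ grad_z  # gradient w.r.t. the layer input

def grad_full(Ws, x, grad_out):
    # standard backprop: keep EVERY intermediate activation in memory
    acts = [x]
    for W in Ws:
        acts.append(forward_layer(W, acts[-1]))
    g = grad_out
    for W, a in zip(reversed(Ws), reversed(acts[:-1])):
        g = backward_layer(W, a, g)
    return g

def grad_checkpointed(Ws, x, grad_out, seg=2):
    # re-computing: store only every `seg`-th activation ("checkpoints"),
    # trading extra forward computation for lower peak memory
    ckpts = {}
    a = x
    for i, W in enumerate(Ws):
        if i % seg == 0:
            ckpts[i] = a
        a = forward_layer(W, a)
    g = grad_out
    for i in range(len(Ws) - 1, -1, -1):
        # recompute the input of layer i from the nearest earlier checkpoint
        j = (i // seg) * seg
        a = ckpts[j]
        for k in range(j, i):
            a = forward_layer(Ws[k], a)
        g = backward_layer(Ws[i], a, g)
    return g
```

Both routines produce identical gradients; the checkpointed version keeps roughly `1/seg` of the activations resident, which is the memory/compute trade-off that re-computation exploits (the paper additionally swaps tensors to host memory, which this sketch does not model).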