HyPar-Flow: Exploiting MPI and Keras for Scalable Hybrid-Parallel DNN Training using TensorFlow.
Ammar Ahmad Awan, Arpan Jain, Quentin Anthony, Hari Subramoni, Dhabaleswar K. Panda
Published in: CoRR (2019)
Keyphrases
- training process
- shared memory
- parallel implementation
- parallelization strategy
- parallel programming
- message passing interface
- distributed memory
- parallel computing
- message passing
- massively parallel
- supervised learning
- parallel algorithm
- parallel tree search
- multi processor
- training examples
- parallel genetic algorithm
- parallel architectures
- general purpose
- high performance computing
- training algorithm
- web scale
- neural network
- test set
- training samples
- graphical models
- training set
- training data
- image sequences
- machine learning
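The title and keyphrases point to MPI-driven, hybrid-parallel DNN training behind a Keras/TensorFlow interface. The snippet below is a minimal, hypothetical sketch of only the data-parallel portion of such a scheme, assuming mpi4py and tf.keras; it is not the HyPar-Flow API, and all names and parameters in it are illustrative.

```python
# Hypothetical sketch: MPI-based data-parallel training of a tf.keras model.
# Each rank holds a full model replica and averages gradients via Allreduce.
from mpi4py import MPI
import numpy as np
import tensorflow as tf

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Small illustrative model; every rank builds an identical replica.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
opt = tf.keras.optimizers.SGD(0.01)

# Synthetic per-rank data shard (stands in for a real training-set partition).
x = np.random.rand(256, 32).astype("float32")
y = np.random.randint(0, 10, size=(256,))

for step in range(10):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)

    # Sum gradients across ranks with MPI Allreduce, then average.
    avg_grads = []
    for g in grads:
        send = g.numpy()
        recv = np.empty_like(send)
        comm.Allreduce(send, recv, op=MPI.SUM)
        avg_grads.append(tf.convert_to_tensor(recv / size))

    opt.apply_gradients(zip(avg_grads, model.trainable_variables))
    if rank == 0:
        print(f"step {step}: loss = {float(loss):.4f}")
```

Run with an MPI launcher, e.g. `mpirun -np 4 python train.py`; the hybrid (data plus model/pipeline) parallelism described by the paper's title goes beyond this purely data-parallel sketch.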