HyPar-Flow: Exploiting MPI and Keras for Scalable Hybrid-Parallel DNN Training with TensorFlow
Ammar Ahmad Awan, Arpan Jain, Quentin Anthony, Hari Subramoni, Dhabaleswar K. Panda
Published in: ISC (2020)
Keyphrases
- parallel implementation
- shared memory
- training process
- message passing interface
- parallelization strategy
- parallel programming
- distributed memory
- parallel computing
- parallel algorithm
- multi processor
- message passing
- massively parallel
- parallel tree search
- parallel processing
- parallel genetic algorithm
- training examples
- parallel computers
- high performance computing
- training algorithm
- commodity hardware
- training set
- training samples
- web scale
- wireless sensor networks
- online learning
- map reduce
- support vector
- test set
- training data
- highly scalable
- parallel architectures
- flow patterns
- feature space
- computing systems
- neural network