Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid
Andrew C. Kirby, Siddharth Samsi, Michael Jones, Albert Reuther, Jeremy Kepner, Vijay Gadepally
Published in: HPEC (2020)
Keyphrases
- neural network
- multi layer
- training process
- parallel implementation
- training algorithm
- feed forward neural networks
- parallel processing
- parallel computation
- error back propagation
- parallel computing
- parallel programming
- pattern recognition
- graphics processing units
- back propagation
- neural network training
- multiple layers
- multi layer perceptron
- feedforward neural networks
- single layer
- backpropagation algorithm
- multiresolution
- neural nets
- real time
- image analysis
- artificial neural networks
- nonlinear dynamic systems
- cluster of workstations
- optic flow computation
- genetic algorithm
- database systems
- multilayer perceptron
- fuzzy logic
- multiscale
- concurrency control
- sparse linear
- boundary conditions
- recurrent neural networks
- activation function
- neural network model
- deep architectures
- parallel algorithm
- auto associative
- training set
- general purpose
- radial basis function network
- shared memory
- massively parallel