Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid
Andrew C. Kirby, Siddharth Samsi, Michael Jones, Albert Reuther, Jeremy Kepner, Vijay Gadepally. Published in: CoRR (2020)
Keyphrases
- neural network
- multi layer
- parallel implementation
- training process
- training algorithm
- error back propagation
- parallel processing
- parallel computation
- parallel programming
- feed forward neural networks
- feedforward neural networks
- neural network training
- single layer
- real time
- multiscale
- multiple layers
- auto associative
- multi layer perceptron
- multiresolution
- cellular neural networks
- pattern recognition
- backpropagation algorithm
- nonlinear dynamic systems
- image analysis
- recurrent neural networks
- artificial neural networks
- recurrent networks
- parallel computing
- optic flow computation
- sparse linear
- neural network structure
- back propagation
- genetic algorithm
- graphics processing units
- parallel algorithm
- cluster of workstations
- neural nets
- database systems
- radial basis function network
- gpu implementation
- activation function
- parallel hardware
- deep architectures
- training set
- general purpose
- nonlinear functions
- gauss seidel
- training data
- fuzzy logic
- deep learning
- neural network model
- concurrency control
- parallel architectures
- hidden layer