Parallel Training of Deep Networks with Local Updates
Michael Laskin, Luke Metz, Seth Nabarro, Mark Saroufim, Badreddine Noune, Carlo Luschi, Jascha Sohl-Dickstein, Pieter Abbeel
Published in: CoRR (2020)
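The title's "local updates" refers to training each part of a network from its own local loss rather than by end-to-end backpropagation, which removes the gradient dependency between layers and allows them to be updated in parallel. A minimal sketch of this idea, using greedy per-layer losses with a stop-gradient between layers (all names and hyperparameters here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
X = rng.normal(size=(64, 8))
y = X @ rng.normal(size=(8, 1))

# Two-layer network. Each layer is trained from its OWN local loss:
# layer 1 fits the target through a small auxiliary readout A1, and
# layer 2 treats layer 1's output as a constant input, so no gradient
# flows between layers -- their updates are decoupled.
W1 = rng.normal(size=(8, 16)) * 0.1
A1 = rng.normal(size=(16, 1)) * 0.1   # auxiliary readout for layer 1's local loss
W2 = rng.normal(size=(16, 1)) * 0.1

lr = 0.01
initial_loss = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

for step in range(200):
    # Layer 1: gradient of its local MSE (up to a constant factor).
    h = np.tanh(X @ W1)
    err1 = h @ A1 - y
    dA1 = h.T @ err1 / len(X)
    dh = (err1 @ A1.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh / len(X)
    W1 -= lr * dW1
    A1 -= lr * dA1

    # Layer 2: its input h is recomputed and treated as a constant
    # (the "stop gradient" that makes this update local).
    h = np.tanh(X @ W1)
    err2 = h @ W2 - y
    W2 -= lr * (h.T @ err2 / len(X))

final_loss = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
print(initial_loss, final_loss)
```

Because neither update waits on a gradient from the other layer, the two update steps inside the loop could run on separate devices; the paper studies how such schemes trade off against standard backpropagation at scale.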
Keyphrases
- recurrent networks
- supervised learning
- parallel processing
- training set
- massively parallel
- deep architectures
- parallel implementation
- shared memory
- network structure
- test set
- parallel computing
- training phase
- training examples
- network size
- unsupervised learning