A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks
Jianguo Chen, Kenli Li, Kashif Bilal, Xu Zhou, Keqin Li, Philip S. Yu. Published in: CoRR (2018)
Keyphrases
- training samples
- convolutional neural networks
- multi processor
- training examples
- real time
- parallel processing
- parallel architecture
- distributed processing
- real world
- master slave
- convolutional network
- management system
- parallel computers
- computer architecture
- parallel implementation
- training phase
- shared memory
- case study
- processing units
- distributed memory
- multi layer
- small scale
- hidden markov models
- level parallelism
- neural network