Accelerating training of DNN in distributed machine learning system with shared memory.
Eun-Ji Lim, Shin-Young Ahn, Wan Choi. Published in: ICTC (2017)
Keyphrases
- shared memory
- machine learning
- training process
- message passing
- commodity hardware
- interprocess communication
- parallel algorithm
- low overhead
- parallel execution
- distributed systems
- distributed memory
- parallel computing
- multiprocessor
- parallel architectures
- parallel programming
- fault tolerant
- distributed environment
- parallel computers
- address space
- heterogeneous platforms
- parallel computation
- parallel architecture
- MapReduce
- parallel machines
- peer-to-peer
- single processor
- high performance computing
- parallel processing
- belief propagation
- shared memory multiprocessor
- image sequences