ShadowSync: Performing Synchronization in the Background for Highly Scalable Distributed Training.
Qinqing Zheng, Bor-Yiing Su, Jiyan Yang, Alisson G. Azzolini, Qiang Wu, Ou Jin, Shri Karandikar, Hagay Lupesko, Liang Xiong, Eric Zhou
Published in: CoRR (2020)
Keyphrases
- highly scalable
- distributed systems
- data partitioning
- web caching
- concurrent processes
- multi-agent
- training set
- machine learning
- communication overhead
- feedforward neural networks
- training algorithm
- fault-tolerant
- supervised learning
- neural network
- genetic algorithm
- peer-to-peer
- distributed environment
- test set
- communication cost
- serious games
- online learning
- wireless sensor networks
- cooperative
- e-learning
- computer vision
- chaotic systems
- distributed network
- training examples
- training samples