Overflowing emerging neural network inference tasks from the GPU to the CPU on heterogeneous servers.
Adithya Kumar, Anand Sivasubramaniam, Timothy Zhu. Published in: SYSTOR (2022)
Keyphrases
- neural network
- heterogeneous computing
- graphics processing units
- neural network model
- artificial neural networks
- graphics processors
- gpu implementation
- data mining
- fuzzy logic
- databases
- multi layer perceptron
- parallel implementation
- probabilistic inference
- back propagation
- computing systems
- belief networks
- data center
- data transfer
- fault diagnosis
- graphics hardware
- general purpose
- scalable distributed
- neural network is trained
- mobile robot