Extending the Message Passing Interface (MPI) with User-Level Schedules.
Derek Schafer, Sheikh K. Ghafoor, Daniel J. Holmes, Martin Ruefenacht, Anthony Skjellum
Published in: CoRR (2019)
Keyphrases
- message passing interface
- parallel implementation
- message passing
- high performance computing
- parallel algorithm
- massively parallel
- parallel architectures
- parallel programming
- shared memory
- scheduling problem
- belief propagation
- processing units
- parallel computing
- parallel computation
- markov random field
- parallel execution
- database systems