Hiding message delivery and reducing memory access latency by providing direct-to-cache transfer during receive operations in a message passing environment.
Farshad Khunjush, Nikitas J. Dimopoulos
Published in: SIGARCH Comput. Archit. News (2006)
Keyphrases
- message passing
- access latency
- memory access
- shared memory
- prefetching
- belief propagation
- cache hit ratio
- distributed systems
- message delivery
- data access
- markov random field
- response time
- main memory
- scheduling algorithm
- processing units
- graphical models
- high speed
- query processing
- access patterns
- external memory
- data structure
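As context for the title's theme of hiding message delivery latency during receive operations, the sketch below shows the conventional software-level approach: posting a nonblocking receive early and overlapping the in-flight delivery with independent computation, using standard MPI calls (MPI_Irecv, MPI_Wait). This is a generic illustration only, not the direct-to-cache transfer mechanism the paper itself proposes; the buffer names, tag, and work loop are hypothetical placeholders.

```c
/*
 * Illustrative sketch only (not the paper's mechanism): hiding message
 * delivery latency by overlapping an in-flight receive with independent
 * computation, using standard MPI nonblocking calls.
 */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    int rank, buf[N];
    MPI_Request req;
    double acc = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Sender: produce and send a message to rank 1. */
        int data[N];
        for (int i = 0; i < N; i++) data[i] = i;
        MPI_Send(data, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receiver: post the receive early so delivery can proceed
         * in the background while useful work continues. */
        MPI_Irecv(buf, N, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

        /* Independent computation overlaps with message delivery. */
        for (int i = 0; i < 1000000; i++) acc += i * 1e-9;

        /* Complete the receive only when the data is actually needed. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1: acc=%f, buf[0]=%d\n", acc, buf[0]);
    }

    MPI_Finalize();
    return 0;
}
```

Run with two processes, e.g. `mpicc overlap.c -o overlap && mpirun -np 2 ./overlap`. Note that even with this overlap, the received data typically lands in main memory and incurs cache misses when first touched, which is the memory access latency the title's direct-to-cache transfer targets.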