LCM: LLM-focused Hybrid SPM-cache Architecture with Cache Management for Multi-Core AI Accelerators.
Chengtao Lai
Zhongchun Zhou
Akash Poptani
Wei Zhang
Published in: ICS (2024)
Keyphrases
cache management
distributed object
query processing
garbage collection
client server
prefetching
transaction management
computing systems
management system
data model
database server