LCM: LLM-focused Hybrid SPM-cache Architecture with Cache Management for Multi-Core AI Accelerators.

Chengtao Lai, Zhongchun Zhou, Akash Poptani, Wei Zhang
Published in: ICS (2024)
Keyphrases
  • cache management
  • distributed object
  • query processing
  • garbage collection
  • client server
  • prefetching
  • transaction management
  • computing systems
  • management system
  • data model
  • database server