Cloud AI 100: 12 TOPS/W Scalable, High Performance and Low Latency Deep Learning Inference Accelerator
Karam Chatha
Published in: HCS (2021)
Keyphrases
- low latency
- deep learning
- virtual machine
- massive scale
- high throughput
- highly efficient
- high speed
- unsupervised learning
- real time
- unsupervised feature learning
- continuous query processing
- machine learning
- cloud computing
- weakly supervised
- operating system
- data center
- learning strategies
- mental models
- stream processing
- bayesian networks
- sensor networks
- mobile nodes
- data model
- query processing
- pairwise