Improving system latency of AI accelerator with on-chip pipelined activation preprocessing and multi-mode batch inference.
Wenxuan Chen, Zheng Wang, Ming Lei, Bo Dong, Zhuo Wang, Yongkui Yang, Chao Chen, Weiyu Guo, Chen Liang, Qian Zhang, Wenqi Fang, Zhibin Yu
Published in: AICAS (2021)
Keyphrases
- preprocessing
- artificial intelligence
- expert systems
- low cost
- machine learning
- intelligent systems
- inference process
- information processing
- high speed
- bayesian networks
- feature extraction
- case based reasoning
- probabilistic inference
- bayesian inference
- ai systems
- response time
- wireless sensor networks
- knowledge based systems
- data flow
- preprocessing phase
- analog vlsi
- compute intensive