A 0.32-128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm
Brian Zimmer, Rangharajan Venkatesan, Yakun Sophia Shao, Jason Clemons, Matthew Fojtik, Nan Jiang, Ben Keller, Alicia Klinefelter, Nathaniel Ross Pinckney, Priyanka Raina, Stephen G. Tell, Yanqing Zhang, William J. Dally, Joel S. Emer, C. Thomas Gray, Stephen W. Keckler, Brucek Khailany
Published in: IEEE J. Solid-State Circuits (2020)
Keyphrases
- neural network
- high speed
- low cost
- neural network model
- host computer
- cmos technology
- high density
- prediction model
- multilayer perceptron
- web scale
- backpropagation
- inference process
- circuit design
- programmable logic
- trained neural network
- pattern recognition
- nm technology