End-to-End DNN Inference on a Massively Parallel Analog In Memory Computing Architecture.
Nazareno Bruschi, Giuseppe Tagliavini, Angelo Garofalo, Francesco Conti, Irem Boybat, Luca Benini, Davide Rossi
Published in: CoRR (2022)
Keyphrases
- end to end
- massively parallel
- processing elements
- parallel computing
- parallel computers
- fine grained
- congestion control
- parallel machines
- high performance computing
- admission control
- real time
- transport layer
- parallel architectures
- high bandwidth
- bayesian networks
- real world
- application layer
- random access
- associative memory
- probabilistic model