Improving Inference Latency and Energy of DNNs through Wireless Enabled Multi-Chip-Module-based Architectures and Model Parameters Compression.
Giuseppe Ascia, Vincenzo Catania, Andrea Mineo, Salvatore Monteleone, Maurizio Palesi, Davide Patti
Published in: NOCS (2020)
Keyphrases
- ultra low power
- low power
- hyperspectral image compression
- high speed
- low cost
- power consumption
- heterogeneous computing
- prefetching
- low latency
- image compression
- energy consumption
- data compression
- mobile devices
- compression algorithm
- compression ratio
- wireless communication
- operating system
- host computer
- wireless broadcast