A low power neural network training processor with 8-bit floating point with a shared exponent bias and fused multiply add trees.
Jeongwoo Park, Sunwoo Lee, Dongsuk Jeon
Published in: AICAS (2022)
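The title names two datapath techniques. As a rough software illustration of the first (8-bit floating point with a shared exponent bias), the NumPy sketch below simulates rounding a tensor to a 1-5-2 FP8 format whose exponent bias is chosen per tensor from its largest magnitude. The function name and the bias-selection heuristic are assumptions for illustration only, not the authors' hardware design.

```python
import numpy as np

def fp8_shared_bias_quantize(x, exp_bits=5, man_bits=2):
    """Simulate rounding a tensor to an 8-bit float (1 sign bit, exp_bits
    exponent bits, man_bits mantissa bits) whose exponent bias is shared
    across the whole tensor instead of being fixed by the format."""
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return np.zeros_like(x)
    e_top = 2 ** exp_bits - 1
    # Pick the shared bias so the largest magnitude lands on the top
    # exponent code (an illustrative heuristic, not necessarily the
    # paper's rule).
    bias = e_top - int(np.floor(np.log2(max_abs)))

    sign = np.sign(x)
    mag = np.abs(x)
    # Per-element exponent code under the shared bias, clamped to range.
    e = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))) + bias,
                0, e_top)
    scale = 2.0 ** (e - bias)
    # Round the significand to man_bits fractional bits.
    m = np.round(mag / scale * 2 ** man_bits) / 2 ** man_bits
    return np.where(mag > 0, sign * m * scale, 0.0)
```

The second technique, fused multiply-add trees, accumulates products through a balanced adder tree so that only the final sum is rounded. A minimal software sketch of that single-rounding behaviour, again only a hedged sketch of the general idea:

```python
def fma_tree_dot(a, b):
    # Elementwise products reduced through a balanced binary adder tree;
    # keeping intermediate sums at full precision and rounding only the
    # final result mimics a fused tree in software.
    prods = [float(p) * float(q) for p, q in zip(a, b)]
    while len(prods) > 1:
        if len(prods) % 2:
            prods.append(0.0)
        prods = [prods[i] + prods[i + 1] for i in range(0, len(prods), 2)]
    return prods[0] if prods else 0.0
```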
Keyphrases
- floating point
- low power
- neural network training
- high speed
- single chip
- instruction set
- power consumption
- gate array
- neural network
- floating point arithmetic
- low cost
- training algorithm
- fixed point
- decision trees
- floating point unit
- image sensor
- optimization method
- low power consumption
- cmos technology
- power reduction
- nm technology
- data fusion
- graphics processing units
- particle swarm optimisation
- logic circuits
- mixed signal
- computer architecture
- sufficient conditions
- general purpose
- support vector machine