Deferred Dropout: An Algorithm-Hardware Co-Design DNN Training Method Provisioning Consistent High Activation Sparsity.
Kangkyu Park, Yunki Han, Lee-Sup Kim. Published in: ICCAD (2021)
Keyphrases
- high efficiency
- computational cost
- improved algorithm
- high accuracy
- dynamic programming
- training process
- clustering method
- experimental evaluation
- cost function
- significant improvement
- objective function
- computational complexity
- recognition algorithm
- detection method
- learning algorithm
- optimization algorithm
- theoretical analysis
- computationally efficient
- preprocessing
- tree structure
- segmentation algorithm
- training phase
- detection algorithm
- synthetic and real images
- similarity measure
- reconstruction method
- k-means
- parallel implementation
- estimation algorithm
- classification method
- optimization method
- convergence rate
- input data
- training algorithm
- selection algorithm
- support vector machine (SVM)
- training samples
- feed-forward neural networks
- stochastic gradient descent
- hardware implementation
- test images
- classification algorithm
- segmentation method
- energy function
- support vector machine
- probabilistic model
- non-negative matrix factorization
- training data
- clustering algorithm
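For context on several of the keyphrases above (feed-forward neural networks, stochastic gradient descent, and the dropout named in the title), the sketch below shows ordinary inverted dropout applied to hidden activations during training. It is a generic NumPy illustration under standard assumptions and is not the Deferred Dropout co-design method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64 samples, 20 features, 3 classes (illustrative only).
X = rng.normal(size=(64, 20))
y = rng.integers(0, 3, size=64)

# Two-layer feed-forward network parameters.
W1 = rng.normal(scale=0.1, size=(20, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 3));  b2 = np.zeros(3)

p_drop = 0.5   # probability of zeroing a hidden activation
lr = 0.1       # SGD learning rate

for step in range(100):
    # Forward pass with ReLU activations.
    h = np.maximum(X @ W1 + b1, 0.0)

    # Inverted dropout: zero activations at random and rescale the
    # survivors so the expected activation magnitude is unchanged.
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    h_drop = h * mask

    # Softmax cross-entropy loss.
    logits = h_drop @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(y)), y]).mean()

    # Backward pass (manual gradients through softmax, dropout, ReLU).
    dlogits = probs.copy()
    dlogits[np.arange(len(y)), y] -= 1.0
    dlogits /= len(y)
    dW2 = h_drop.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * mask * (h > 0)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Plain stochastic gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 20 == 0:
        print(f"step {step}: loss {loss:.4f}")
```

The dropout mask also makes a fraction of the hidden activations exactly zero in each step, which is the activation sparsity that sparsity-aware training hardware can exploit; how the paper provisions that sparsity consistently is specific to its own method and not reproduced here.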