ClosNets: Batchless DNN Training with On-Chip a Priori Sparse Neural Topologies
Mihailo Isakov, Alan Ehret, Michel A. Kinsy. Published in: FPL (2018)
Keyphrases
- training process
- neural network
- high speed
- network architecture
- low cost
- avoid overfitting
- compressed sensing
- training algorithm
- training examples
- training data
- sparse data
- training phase
- multi layer perceptron
- online learning
- recurrent networks
- associative memory
- evolvable hardware
- feed forward neural networks
- real time
- data sets
- compressive sensing
- high density
- learning algorithm
- semi supervised
- supervised learning
- training samples