Partitioning sparse deep neural networks for scalable training and inference.
Gunduz Vehbi Demirci, Hakan Ferhatosmanoglu
Published in: ICS (2021)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- feed forward neural networks
- multi layer perceptron
- pattern recognition
- training set
- neural network training
- structured prediction
- back propagation
- deep architectures
- artificial neural networks
- avoid overfitting
- neural network model
- highly scalable
- sparse data
- backpropagation algorithm
- Bayesian networks
- sparse representation
- training examples
- unsupervised learning
- probabilistic inference
- online learning
- fuzzy logic
- training phase
- error back propagation
- belief nets
- linear SVM
- inference process
- neural network structure
- neural nets
- multi layer
- recurrent neural networks
- feed forward
- model selection
- supervised learning
- high dimensional
- expert systems
- support vector
- training data
- genetic algorithm