SuperNeurons: FFT-based Gradient Sparsification in the Distributed Training of Deep Neural Networks.
Linnan Wang, Wei Wu, Yiyang Zhao, Junyu Zhang, Hang Liu, George Bosilca, Jack J. Dongarra, Maurice Herlihy, Rodrigo Fonseca
Published in: CoRR (2018)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- backpropagation algorithm
- distributed systems
- deep architectures
- multi layer perceptron
- multi agent
- artificial neural networks
- error back propagation
- mobile agents
- back propagation
- supervised learning
- training samples
- neural network structure
- fuzzy logic
- neural network training
- cooperative
- data sets
- genetic algorithm
- training phase
- pattern recognition
- neural nets
- recurrent neural networks
- radial basis function
- distributed environment
- frequency domain
- communication cost
- multi layer
- training set
- support vector
- neural network model
- multiscale
- deep learning
- machine learning
- edge detection
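The keyphrases "frequency domain", "communication cost", and the title's FFT-based gradient sparsification suggest the paper's core idea: compress gradients before exchanging them between workers by keeping only a few frequency-domain coefficients. A minimal, hypothetical NumPy sketch of that general technique (not the paper's actual algorithm; the function names, the top-k selection rule, and the parameter `k` are all assumptions for illustration):

```python
import numpy as np

def fft_sparsify(grad, k):
    """Keep the k largest-magnitude FFT coefficients of a dense gradient.

    Returns the sparsified spectrum and the indices that were kept; only
    those k coefficients (plus their indices) would need to be sent over
    the network, instead of the full gradient vector.
    """
    spectrum = np.fft.rfft(grad)
    keep = np.argsort(np.abs(spectrum))[-k:]   # indices of the k largest coefficients
    sparse = np.zeros_like(spectrum)
    sparse[keep] = spectrum[keep]
    return sparse, keep

def reconstruct(sparse_spectrum, n):
    """Invert the sparsified spectrum back to a length-n gradient estimate."""
    return np.fft.irfft(sparse_spectrum, n=n)

rng = np.random.default_rng(0)
g = rng.standard_normal(1024)                  # stand-in for a layer's gradient
sparse, keep = fft_sparsify(g, k=64)
approx = reconstruct(sparse, n=g.size)         # lossy approximation of g
```

In a distributed setting the receiver would rebuild the dense gradient with `reconstruct` before applying the update; the compression ratio is roughly `k` coefficients versus the full gradient length, traded against approximation error.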