Scalable K-FAC Training for Deep Neural Networks With Distributed Preconditioning
Lin Zhang, Shaohuai Shi, Wei Wang, Bo Li. Published in: IEEE Trans. Cloud Comput. (2023)
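The title refers to K-FAC (Kronecker-Factored Approximate Curvature), which approximates each layer's Fisher information block as a Kronecker product of two small covariance matrices and uses their inverses to precondition the gradient. The sketch below illustrates that core preconditioning step for one fully connected layer; all sizes, the damping value, and the random data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for illustration only
d_in, d_out, batch = 8, 4, 32

# a: layer inputs; g: back-propagated pre-activation gradients
a = rng.standard_normal((batch, d_in))
g = rng.standard_normal((batch, d_out))

# Kronecker factors of the layer's Fisher block: F ≈ G ⊗ A
A = a.T @ a / batch            # (d_in, d_in) input covariance
G = g.T @ g / batch            # (d_out, d_out) gradient covariance

# Damping for numerical stability, as K-FAC implementations commonly add
damping = 1e-2
A_inv = np.linalg.inv(A + damping * np.eye(d_in))
G_inv = np.linalg.inv(G + damping * np.eye(d_out))

# Raw weight gradient and its K-FAC preconditioned version:
# applying (G ⊗ A)^{-1} to vec(dW) equals computing G^{-1} dW A^{-1}
dW = g.T @ a / batch           # (d_out, d_in)
precond_dW = G_inv @ dW @ A_inv
```

Because each layer only needs its two small factors `A` and `G`, the inversions can be partitioned across workers, which is the kind of distributed preconditioning the title describes.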
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- conjugate gradient
- error back propagation
- lightweight
- scalable distributed
- distributed systems
- feed forward neural networks
- deep architectures
- pattern recognition
- backpropagation algorithm
- neural network training
- commodity hardware
- artificial neural networks
- high scalability
- feed forward
- back propagation
- fully distributed
- training phase
- data intensive
- distributed environment
- fault diagnosis
- multi layer
- multi layer perceptron
- fuzzy logic
- recurrent networks
- multi agent
- iterative methods
- hidden layer
- data sets
- online learning
- training set
- genetic algorithm
- activation function
- communication cost
- multilayer perceptron
- neural network model
- cooperative
- distributed storage