Scalable K-FAC Training for Deep Neural Networks with Distributed Preconditioning.
Lin Zhang, Shaohuai Shi, Wei Wang, Bo Li. Published in: CoRR (2022)
Keyphrases
- neural network
- training process
- training algorithm
- feedforward neural networks
- distributed systems
- neural network structure
- scalable distributed
- lightweight
- backpropagation algorithm
- multi-layer perceptron
- fuzzy logic
- pattern recognition
- multi-agent
- recurrent networks
- training examples
- back-propagation
- deep architectures
- communication cost
- inverse problems
- edge-preserving
- distributed storage
- recurrent neural networks
- neural network model
- cooperative
- genetic algorithm
- multi-layer
- commodity hardware
- neural network training
- fully distributed
- conjugate gradient
- self-organizing maps
- training phase
- distributed environment
- training samples
- mobile agents
- peer-to-peer
- supervised learning
- data-intensive
- high scalability
- image reconstruction
- image sequences
- decision trees