FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks.
Bingqing Song, Prashant Khanduri, Xinwei Zhang, Jinfeng Yi, Mingyi Hong
Published in: ICML (2023)
Keyphrases
- multi layer
- error back propagation
- feed forward neural networks
- neural network
- neural nets
- training process
- multi layer perceptron
- training algorithm
- single layer
- back propagation
- feed forward
- data mining
- multiple layers
- multilayer perceptron
- learning tasks
- text classification
- artificial neural networks
- multiscale
- feature extraction
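The algorithm named in the title, FedAvg (federated averaging), can be sketched minimally as a server loop that broadcasts a model, lets each client run a few local gradient steps on its own data, and then averages the returned parameters. The toy below uses a linear least-squares model for brevity; the paper itself analyzes overparameterized multi-layer networks, and the client setup here (synthetic data, shared ground-truth weights, chosen step size and step counts) is an illustrative assumption, not the paper's experimental setting.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """Run a few local full-batch gradient steps on one client's data."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

def fedavg(client_data, rounds=50, lr=0.1, steps=5):
    """Server loop: broadcast current model, collect local updates, average."""
    d = client_data[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), X, y, lr, steps) for X, y in client_data]
        w = np.mean(updates, axis=0)  # parameter averaging across clients
    return w

# Synthetic federated setup: 4 clients, data generated from one true model,
# so zero training loss is attainable (mirroring the interpolation regime).
rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ w_true))

w = fedavg(clients)
print(np.allclose(w, w_true, atol=1e-3))
```

In this consistent (interpolation) setting, the averaged iterates recover the shared minimizer, which loosely illustrates the kind of linear convergence to zero training loss the paper establishes in the far richer multi-layer setting.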