Homotopy Relaxation Training Algorithms for Infinite-Width Two-Layer ReLU Neural Networks.
Yahong Yang, Qipin Chen, Wenrui Hao. Published in: CoRR (2023)
Keyphrases
- neural network
- multi-layer
- neural network training
- data structure
- computationally efficient
- machine learning
- convergence rate
- combinatorial optimization
- theoretical analysis
- significant improvement
- learning algorithm
- machine learning algorithms
- backpropagation
- times faster
- worst case
- learning rate
- training process
- training phase
- feed-forward neural networks
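
The entry above gives no algorithmic detail, so the following is only a loose illustration of the topic named in the title: training a wide two-layer ReLU network while a relaxed activation is continuously deformed toward ReLU. The interpolation form `sigma`, the annealing schedule, and all hyperparameters are assumptions made for this sketch; it is not the algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigma(z, s):
    # Assumed homotopy activation: linear at s = 0, pure ReLU at s = 1.
    # The relaxation actually used in the paper may differ.
    return (1.0 - s) * z + s * relu(z)

def d_sigma(z, s):
    # Derivative of the assumed homotopy activation w.r.t. its input.
    return (1.0 - s) + s * (z > 0.0)

# Toy 1-D regression data for a two-layer network f(x) = a^T sigma_s(W x).
n, width, lr, steps = 64, 512, 1e-2, 2000
x = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(np.pi * x)

W = rng.normal(size=(width, 1))
a = rng.normal(size=(width, 1)) / np.sqrt(width)

for t in range(steps):
    s = min(1.0, t / (steps // 2))   # anneal the homotopy parameter up to 1 (pure ReLU)
    h = x @ W.T                      # pre-activations, shape (n, width)
    pred = sigma(h, s) @ a           # network output, shape (n, 1)
    err = pred - y
    # Gradients of the mean-squared error (backpropagation written out by hand).
    grad_a = sigma(h, s).T @ err / n
    grad_W = ((err @ a.T) * d_sigma(h, s)).T @ x / n
    a -= lr * grad_a
    W -= lr * grad_W
```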