Fast generalization error bound of deep learning without scale invariance of activation functions.
Yoshikazu Terada, Ryoma Hirose
Published in: CoRR (2019)
Keyphrases
- error bounds
- deep learning
- scale invariance
- activation function
- neural network
- scale space
- artificial neural networks
- feed forward
- theoretical analysis
- hidden layer
- worst case
- machine learning
- unsupervised learning
- learning rate
- neural nets
- back propagation
- multilayer perceptron
- radial basis function
- natural images
- basis functions
- multiscale