Fast generalization error bound of deep learning without scale invariance of activation functions.
Yoshikazu Terada, Ryoma Hirose. Published in: Neural Networks (2020)
Keyphrases
- error bounds
- deep learning
- scale invariance
- activation function
- neural network
- feed forward
- scale space
- hidden layer
- theoretical analysis
- machine learning
- unsupervised learning
- artificial neural networks
- back propagation
- worst case
- neural nets
- natural images
- multilayer perceptron
- learning rate
- radial basis function
- basis functions
- recurrent neural networks
- rbf neural network
- multiscale
- decision trees
- dimensionality reduction
- co-occurrence
- image processing