Do We Need Zero Training Loss After Achieving Zero Training Error?
Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama. Published in: CoRR (2020)
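The paper behind this entry answers its title question in the negative: it proposes keeping the training loss away from zero with a "flooding" regularizer, which replaces the loss L with |L - b| + b for a small flood level b. A minimal sketch of that transformation (the function name and the value b = 0.01 are illustrative choices, not from this entry):

```python
def flooded_loss(loss: float, b: float = 0.01) -> float:
    """Flooding (Ishida et al., 2020): |loss - b| + b.

    Equal to the original loss when it is above the flood level b,
    and mirrored when below it, so the effective training loss
    never drops under b even after training error reaches zero.
    """
    return abs(loss - b) + b

# Above the flood level the loss is unchanged; below it, it is
# reflected upward, pushing gradient descent to hover around b.
print(flooded_loss(0.5))    # unchanged, approximately 0.5
print(flooded_loss(0.001))  # mirrored, approximately 0.019
```

In practice the same transformation is applied to the mini-batch training loss before backpropagation, acting as a one-line regularizer.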
Keyphrases
- training error
- error rate
- generalization error
- classification error
- gradient method
- hidden layer
- adaboost algorithm
- training set
- training samples
- prediction error
- error bounds
- supervised learning
- learning algorithm
- model selection
- loss function
- cross validation
- test set
- boosting algorithms
- multiclass classification
- artificial neural networks