Do We Need Zero Training Loss After Achieving Zero Training Error?
Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama. Published in: ICML (2020)
Keyphrases
- training error
- error rate
- generalization error
- AdaBoost algorithm
- gradient method
- hidden layer
- classification error
- training set
- prediction error
- base classifiers
- face detection
- test set
- training process
- supervised learning
- multiclass classification
- learning algorithm
- neural network
- backpropagation
- binary classification
- boosting algorithms
- active learning
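The auto-extracted keyphrases above omit the paper's central proposal: the "flooding" regularizer, which keeps the training loss floating around a small constant b (the flood level), i.e. it optimizes |L(θ) − b| + b instead of L(θ), so the model performs gradient descent when the loss is above b and gradient ascent when it falls below. A minimal PyTorch sketch of this idea follows; the toy model, synthetic data, hyperparameters, and flood level b = 0.05 are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn


def flooded_loss(loss: torch.Tensor, b: float) -> torch.Tensor:
    # Flooding: |L - b| + b. Same gradient as L above the flood level b,
    # reversed gradient (ascent) below it, so the training loss settles
    # around b instead of being driven to zero.
    return (loss - b).abs() + b


# Toy binary-classification setup (architecture, data, and b = 0.05
# are illustrative choices only).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    flooded_loss(loss, b=0.05).backward()
    optimizer.step()
```

Note that flooding is a one-line change to an existing training loop: only the value passed to `backward()` differs, which is what makes the method easy to combine with other regularizers.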