Ultra-Sparse Classifiers Through Minimizing the VC Dimension in the Empirical Feature Space - Submitted to the Special Issue on "Off the Mainstream: Advances in Neural Networks and Machine Learning for Pattern Recognition".
Jayadeva, Mayank Sharma, Sumit Soman, Himanshu Pant
Published in: Neural Process. Lett. (2018)
Keyphrases
- special issue
- pattern recognition
- vc dimension
- machine learning
- feature space
- statistical learning theory
- neural network
- learning machines
- empirical risk minimization
- feature selection
- generalization bounds
- high dimensional
- machine learning algorithms
- risk bounds
- uniform convergence
- training samples
- sample complexity
- training set
- upper bound
- feature set
- dimensionality reduction
- decision trees
- concept classes
- inductive inference
- lower bound
- sample size
- supervised classification
- learning problems
- support vector machine
- feature extraction
- support vector
- svm classifier
- learning algorithm
- kernel methods
- training data
- active learning
- supervised learning
- worst case
- kernel function
- linear classifiers
- feature vectors
- theoretical analysis
- principal component analysis
- learning models
- hyperplane
- computational intelligence
- input space
- data points
- training examples
- gaussian kernels
- artificial neural networks
- semi-supervised learning
- generalization error
- euclidean space
- reinforcement learning
- transfer learning
- support vector machine svm
- compression scheme
- learning theory
- kernel machines
- feature subset
- image processing
- learning tasks
- function classes
- euclidean distance