The Optimal Ridge Penalty for Real-world High-dimensional Data Can Be Zero or Negative due to the Implicit Ridge Regularization.
Dmitry Kobak, Jonathan Lomond, Benoit Sanchez
Published in: J. Mach. Learn. Res. (2020)
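The title's claim — that in the n ≪ p regime the test-error-optimal ridge penalty λ can turn out to be zero or even negative — can be illustrated with a small synthetic sketch. This is a hypothetical toy setup, not code from the paper: the spiked-covariance construction, the spike strength, and all dimensions are illustrative assumptions. It uses the dual (kernel) form of the ridge estimator, which remains well defined for negative λ as long as XXᵀ + λI stays positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): n << p, a spiked covariance,
# and a true signal aligned with the leading high-variance direction.
n, p = 50, 500
u = np.ones(p) / np.sqrt(p)                      # leading covariance direction
X = rng.standard_normal((n, p)) + 20.0 * rng.standard_normal((n, 1)) * u
beta_true = 5.0 * u                              # signal aligned with the spike
y = X @ beta_true + rng.standard_normal(n)

def ridge(X, y, lam):
    """Ridge estimator via the dual form  beta = X^T (X X^T + lam I)^{-1} y.

    Valid for n < p, and for negative lam as long as X X^T + lam I
    remains positive definite (here the smallest eigenvalue of X X^T
    is far above the most negative lam on the grid below).
    """
    n = X.shape[0]
    return X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

# Test error over a lambda grid that includes zero and negative values.
Xte = rng.standard_normal((1000, p)) + 20.0 * rng.standard_normal((1000, 1)) * u
yte = Xte @ beta_true + rng.standard_normal(1000)
lams = np.linspace(-5.0, 50.0, 111)
mse = [np.mean((yte - Xte @ ridge(X, y, lam)) ** 2) for lam in lams]
best = lams[int(np.argmin(mse))]
print(f"test-error-minimizing lambda on this draw: {best:.2f}")
```

Whether the minimizer lands at a negative λ depends on the random draw and on how strongly the signal aligns with the high-variance direction; the point of the sketch is only that the λ ≤ 0 region is both computable and a legitimate candidate for the minimum, which is what the paper's title asserts for real-world data.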
Keyphrases
- high dimensional data
- real world
- data sets
- nearest neighbor
- high dimensional
- dimensionality reduction
- low dimensional
- subspace clustering
- high dimensionality
- data points
- clustering high dimensional data
- similarity search
- high dimensions
- data analysis
- high dimensional datasets
- dimension reduction
- original data
- data distribution
- input space
- low rank
- linear discriminant analysis
- dimensional data
- subspace learning
- nonlinear dimensionality reduction
- high dimensional spaces
- lower dimensional
- manifold learning
- data mining
- small sample size
- variable selection
- high dimensional data sets
- sparse representation
- image processing
- objective function
- multi dimensional
- regularization parameter
- machine learning
- face recognition