Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation.
Ross M. Clarke, Elre Talea Oldewage, José Miguel Hernández-Lobato. Published in: ICLR (2022)
Keyphrases
- hyperparameters
- weight update
- high dimensional
- model selection
- parameter space
- cross validation
- closed form
- Bayesian inference
- support vector
- random sampling
- neural network
- Bayesian framework
- EM algorithm
- prior information
- maximum likelihood
- maximum a posteriori
- noise level
- sample size
- low dimensional
- incomplete data
- incremental learning
- online training
- Gaussian processes
- backpropagation
- learning rate
- high dimensional data
- dimensionality reduction
- missing values
- genetic algorithm
- data points
- parameter settings
- expectation maximization
- feature space
- gradient vector
- training set
- spiking neural networks
- feature selection