Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.
Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias W. Seeger
Published in: ICML (2010)
Keyphrases
- gaussian process
- experimental design
- upper confidence bound
- regret bounds
- gaussian processes
- multi-armed bandit
- hyperparameters
- active learning
- regression model
- model selection
- random sampling
- semi-supervised
- empirical studies
- sample size
- bayesian framework
- online learning
- bandit problems
- latent variables
- lower bound
- worst case
- game play
- class imbalance
- virtual learning environments
- feature selection
- data sets
- video games
- cross validation
- error rate
- nearest neighbor
- high dimensional
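The keyphrases above revolve around the paper's central algorithm, GP-UCB, which selects each point by maximizing an upper confidence bound built from the Gaussian process posterior mean and standard deviation. Below is a minimal, illustrative sketch of that selection rule on a toy problem; the objective function, RBF kernel, candidate grid, noise level, and confidence parameter delta are assumptions chosen for the example (using scikit-learn's GP regressor), not the paper's experimental setup.

```python
# Illustrative GP-UCB loop: at round t, pick x_t = argmax mu(x) + sqrt(beta_t) * sigma(x).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Hypothetical unknown reward function, observed with Gaussian noise.
    return np.sin(3 * x) + 0.5 * np.cos(5 * x)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # finite decision set D
noise = 0.1
delta = 0.1  # assumed confidence parameter

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=noise ** 2)
X, y = [], []

for t in range(1, 31):
    if X:
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(candidates, return_std=True)
    else:
        # No data yet: fall back to the prior (zero mean, unit std).
        mu, sigma = np.zeros(len(candidates)), np.ones(len(candidates))

    # One beta_t schedule analyzed in the paper for finite decision sets:
    # beta_t = 2 log(|D| t^2 pi^2 / (6 delta)).
    beta_t = 2 * np.log(len(candidates) * t ** 2 * np.pi ** 2 / (6 * delta))

    ucb = mu + np.sqrt(beta_t) * sigma
    x_t = candidates[np.argmax(ucb)]          # point with highest upper confidence bound
    y_t = f(x_t[0]) + rng.normal(0.0, noise)  # noisy observation of the objective
    X.append(x_t)
    y.append(y_t)

print("best observed value:", max(y))
```

The sqrt(beta_t) factor controls the exploration/exploitation trade-off: larger values favor points with high posterior uncertainty, while smaller values favor points with high posterior mean.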