Thompson Sampling for the MNL-Bandit.
Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi. Published in: COLT (2017)
Keyphrases
- random sampling
- sampling algorithm
- sample size
- sampling methods
- active learning
- multi armed bandit
- sampling strategy
- markov chain monte carlo
- sampling strategies
- multinomial logit
- sparse sampling
- sampling rate
- markov chain
- real world
- upper bound
- monte carlo
- image reconstruction
- search algorithm
- similarity measure
- website
- image processing
- search engine
- artificial intelligence
- data mining
- neural network
- databases
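For context, the paper concerns Thompson sampling for dynamic assortment selection under a multinomial logit (MNL) choice model, tying together several of the keyphrases above (thompson sampling, multi armed bandit, multinomial logit). Below is a minimal illustrative sketch of a generic Thompson-sampling loop for this kind of MNL assortment problem; the problem setup (products with unknown attraction parameters, assortments of bounded size, epoch-based offering until a no-purchase event) follows the standard MNL-bandit formulation, but the crude Gaussian posterior approximation and brute-force assortment search are simplifying assumptions for illustration, not the authors' algorithm.

```python
"""Toy Thompson-sampling loop for MNL assortment selection.

Illustrative sketch only: the Gaussian posterior approximation and
brute-force assortment search are assumptions made for brevity and do
not reproduce the algorithm or guarantees from the paper.
"""
import itertools
import numpy as np

rng = np.random.default_rng(0)

N, K, T = 8, 3, 500                  # products, assortment size cap, epochs
v_true = rng.uniform(0.1, 1.0, N)    # unknown MNL attraction parameters
r = rng.uniform(0.5, 1.0, N)         # known per-product revenues

# Epoch-based statistics: offering the same assortment until a no-purchase
# event makes the number of purchases of product i an unbiased estimate of v_i.
epoch_counts = np.zeros(N)           # total purchases of each product
epochs_seen = np.zeros(N)            # epochs in which each product was offered

def expected_revenue(S, v):
    """Expected per-offering revenue of assortment S under MNL parameters v."""
    return sum(r[i] * v[i] for i in S) / (1.0 + v[list(S)].sum())

def best_assortment(v):
    """Brute-force search over assortments of size <= K (fine for small N)."""
    best, best_val = (), 0.0
    for k in range(1, K + 1):
        for S in itertools.combinations(range(N), k):
            val = expected_revenue(S, v)
            if val > best_val:
                best, best_val = S, val
    return best

total_revenue, total_offers = 0.0, 0
for _ in range(T):
    # Thompson step: sample v_hat from a crude Gaussian posterior approximation.
    mean = np.where(epochs_seen > 0, epoch_counts / np.maximum(epochs_seen, 1), 1.0)
    std = 1.0 / np.sqrt(epochs_seen + 1.0)
    v_hat = np.clip(rng.normal(mean, std), 1e-3, None)

    S = best_assortment(v_hat)

    # Offer S repeatedly until a no-purchase event (one epoch), then update.
    while True:
        total_offers += 1
        probs = np.append(v_true[list(S)], 1.0)   # MNL choice probabilities
        probs /= probs.sum()
        choice = rng.choice(len(S) + 1, p=probs)
        if choice == len(S):                      # no purchase ends the epoch
            break
        item = S[choice]
        epoch_counts[item] += 1
        total_revenue += r[item]
    epochs_seen[list(S)] += 1

print(f"average revenue per offering: {total_revenue / total_offers:.3f}")
print(f"benchmark with known v:       "
      f"{expected_revenue(best_assortment(v_true), v_true):.3f}")
```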