Continuous Time Bandits With Sampling Costs
Rahul Vaze
Manjesh Kumar Hanawal
Published in:
CoRR (2021)
Keyphrases
Markov chain
random sampling
total cost
Markov processes
multi-armed bandit
sample size
stochastic systems
multi-armed bandits
Monte Carlo
sampling algorithm
sampling strategy
data sets
genetic algorithm
feature selection
optimal control
sampling strategies