Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits.
Martin J. Zhang
James Zou
David Tse
Published in: CoRR (2019)
Keyphrases
Monte Carlo
adaptive sampling
multi-armed bandits
importance sampling
Markov chain
Monte Carlo simulation
Markovian decision
variance reduction
Monte Carlo methods
particle filter
optimal strategy
matrix inversion
decision making
search algorithm