Automated State-Dependent Importance Sampling for Markov Jump Processes via Sampling from the Zero-Variance Distribution.
Adam W. Grace
Dirk P. Kroese
Werner Sandmann
Published in: J. Appl. Probab. (2014)
Keyphrases
importance sampling
Markov chain
state dependent
stationary distribution
variance reduction
steady state
Monte Carlo
large deviations
transition probabilities
variance estimator
Markov chain Monte Carlo
Markov model
random walk
state space
service times
single server
queueing networks