BandiTS: Dynamic timing speculation using multi-armed bandit based optimization.

Jeff Jun Zhang, Siddharth Garg
Published in: DATE (2017)
Keyphrases
  • multi-armed bandits
  • decision trees
  • reinforcement learning
  • pairwise
  • machine learning
  • decision making
  • active learning
  • naive bayes
  • information theoretic
  • prediction error
  • regret bounds
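The central keyphrase above, "multi-armed bandit", refers to a family of online decision algorithms that balance exploring uncertain options against exploiting the best-known one. The paper's own optimizer is not reproduced here; as an illustrative sketch only, the following epsilon-greedy bandit (a standard baseline, with hypothetical arm payoffs) shows the basic explore/exploit loop such methods build on:

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Illustrative epsilon-greedy multi-armed bandit on Bernoulli arms.

    true_means: hypothetical success probability of each arm (not from the paper).
    Returns the empirical mean estimate and pull count for each arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms       # how often each arm was pulled
    estimates = [0.0] * n_arms  # running empirical mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: pick a random arm
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit best estimate
        reward = 1.0 if rng.random() < true_means[arm] else 0.0   # Bernoulli payoff
        counts[arm] += 1
        # incremental update of the empirical mean for the chosen arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With enough steps, the arm with the highest true mean accumulates the most pulls, which is the behavior regret bounds (another keyphrase above) quantify.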