Multi-armed Bandit with Additional Observations.

Donggyu Yun, Alexandre Proutière, Sumyeong Ahn, Jinwoo Shin, Yung Yi
Published in: Proc. ACM Meas. Anal. Comput. Syst. (2018)
Keyphrases
  • multi-armed bandit
  • multi-armed bandits
  • learning algorithm
  • reinforcement learning
  • nearest neighbor