Sample Complexity of Risk-Averse Bandit-Arm Selection

We consider stochastic multi-armed bandit problems where each arm generates i.i.d. rewards according to an unknown distribution. Whereas classical bandit solutions maximize only the expected reward, we consider the problem of minimizing risk using notions such as the value-at-risk, the average value-at-risk, and the mean-variance risk. We present algorithms that minimize the risk over single and multiple time periods, along with PAC accuracy guarantees given a finite number of reward samples. In the single-period case, we show that identifying the arm with least risk requires not many more samples than identifying the arm with the highest expected reward. Although minimizing the multi-period value-at-risk is known to be hard, we present an algorithm with comparable sample complexity under additional assumptions.
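As a rough illustration of the single-period setting, the sketch below estimates two of the risk measures named above (value-at-risk and average value-at-risk) from finite reward samples and selects the arm with least empirical risk. This is not the report's algorithm; the function names, the convention of treating losses as negated rewards, and the level alpha = 0.95 are assumptions made for the example.

```python
import numpy as np

def empirical_var(samples, alpha):
    """Empirical value-at-risk at level alpha: the alpha-quantile of the
    loss distribution (losses taken as negated rewards)."""
    losses = -np.asarray(samples)
    return np.quantile(losses, alpha)

def empirical_avar(samples, alpha):
    """Empirical average value-at-risk (a.k.a. CVaR): the mean loss in the
    tail at and beyond the empirical VaR threshold."""
    losses = -np.asarray(samples)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def select_least_risk_arm(reward_samples, alpha=0.95):
    """Return the index of the arm with smallest empirical AVaR, given a
    finite batch of i.i.d. reward samples per arm."""
    risks = [empirical_avar(s, alpha) for s in reward_samples]
    return int(np.argmin(risks))

# Hypothetical example: three arms with different means and spreads.
rng = np.random.default_rng(0)
arms = [rng.normal(mu, sigma, size=2000)
        for mu, sigma in [(1.0, 2.0), (0.8, 0.5), (1.2, 3.0)]]
print(select_least_risk_arm(arms))  # likely arm 1: lowest tail risk despite lowest mean
```

How tightly these empirical quantile and tail-average estimates concentrate around the true risk, as a function of the number of samples, is exactly what a PAC guarantee of the kind described above quantifies.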

By: Jia Yuan Yu, Evdokia Nikolova

Published in: IBM Research Report RC25379, 2013

Full report: rc25379.pdf

Questions about this service can be mailed to reports@us.ibm.com.