The multi-armed bandit is a quintessential machine learning problem that
requires balancing exploration and exploitation. While there has been progress in
developing algorithms with strong theoretical guarantees, there has been less
focus on practical near-optimal finite-time performance. In this paper, we
propose an algorithm for Bayesian multi-armed bandits that utilizes
value-function-driven online planning techniques. Building on previous work on
UCB and the Gittins index, we introduce linearly separable value functions that
take both the expected return and the benefit of exploration into consideration
to perform an n-step lookahead. The algorithm enjoys a sub-linear performance
guarantee, and we present simulation results that confirm its strength on
problems with structured priors. The simplicity and generality of our approach
make it a strong candidate for analyzing more complex multi-armed bandit
problems.
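To give a concrete flavor of the kind of planning the abstract describes, the following is a minimal sketch, not the paper's algorithm: it assumes a Beta-Bernoulli bandit, a linearly separable value made of a posterior-mean (exploitation) term plus a hypothetical UCB-style bonus with an assumed constant c, and a brute-force n-step expansion of the belief tree. The actual value functions, lookahead rule, and guarantees are developed in the paper itself.

```python
import numpy as np


def ucb_leaf(alpha, beta, c=1.0):
    """Linearly separable leaf value: posterior mean (expected return)
    plus an exploration bonus that shrinks with the pseudo-count.
    The bonus form and the constant c are illustrative assumptions."""
    mean = alpha / (alpha + beta)
    return float(np.max(mean + c / np.sqrt(alpha + beta)))


def belief_value(alpha, beta, depth, c=1.0):
    """Value of a Beta-Bernoulli belief state with `depth` pulls of
    lookahead remaining before falling back to the leaf estimate."""
    if depth == 0:
        return ucb_leaf(alpha, beta, c)
    best = -np.inf
    for arm in range(len(alpha)):
        p = alpha[arm] / (alpha[arm] + beta[arm])  # posterior predictive P(reward = 1)
        a1, b1 = alpha.copy(), beta.copy()
        a1[arm] += 1                               # posterior after observing a success
        a0, b0 = alpha.copy(), beta.copy()
        b0[arm] += 1                               # posterior after observing a failure
        q = p * (1.0 + belief_value(a1, b1, depth - 1, c)) \
            + (1.0 - p) * belief_value(a0, b0, depth - 1, c)
        best = max(best, q)
    return best


def choose_arm(alpha, beta, depth=2, c=1.0):
    """Pick the arm whose success/failure-expanded subtree has the best value."""
    def q(arm):
        p = alpha[arm] / (alpha[arm] + beta[arm])
        a1, b1 = alpha.copy(), beta.copy()
        a1[arm] += 1
        a0, b0 = alpha.copy(), beta.copy()
        b0[arm] += 1
        return p * (1.0 + belief_value(a1, b1, depth - 1, c)) \
            + (1.0 - p) * belief_value(a0, b0, depth - 1, c)
    return max(range(len(alpha)), key=q)


# Example: two-armed Bernoulli bandit with uniform Beta(1, 1) priors.
alpha = np.array([1.0, 1.0])
beta = np.array([1.0, 1.0])
print(choose_arm(alpha, beta, depth=2))
```

The exhaustive belief-tree expansion above is exponential in the lookahead depth and is meant only to illustrate how an exploitation term and an exploration bonus can be combined inside an n-step plan over posterior beliefs.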