What Doubling Tricks Can and Can’t
Do for Multi-Armed Bandits

Abstract

An online reinforcement learning algorithm is anytime if it does not need to know in advance the horizon of the experiment. A well-known technique to obtain an anytime algorithm from any non-anytime algorithm is the "Doubling Trick". In the contexts of adversarial or stochastic multi-armed bandits, the performance of an algorithm is measured by its regret, and we study two families of sequences of growing horizons (geometric and exponential) to generalize previously known results that certain doubling tricks can be used to conserve certain regret bounds. In a broad setting, we prove that a geometric doubling trick can be used to conserve (adversarial) bounds in $O(\sqrt{T})$ but cannot conserve (stochastic) bounds in $O(\log T)$. We give insights as to why exponential doubling tricks may be better, as they conserve bounds in $O(\log T)$, and are close to conserving bounds in $O(\sqrt{T})$.

Keywords:

Multi-Armed Bandits; Reinforcement Learning; Online Learning; Anytime Algorithms; Sequential Learning; Doubling Trick.

1 Introduction

Multi-Armed Bandit (MAB) problems are well-studied sequential decision making problems in which an agent repeatedly chooses an action (the "arm" of a one-armed bandit) in order to maximize some total reward [22]. Initial motivation for their study came from the modeling of clinical trials, as early as 1933 with the seminal work of [25]. In this example, arms correspond to different treatments with unknown, random effects. Since then, MAB models have proved useful for many more applications, ranging from cognitive radio [14] to online content optimization (news article recommendation [19], online advertising [10] or A/B Testing [16]) and portfolio optimization [23].

While the number of patients involved in a clinical study (and thus the number of treatments to select) is often decided in advance, in other contexts the total number of decisions to make (the horizon $T$) is unknown. It may correspond to the total number of visitors of a website optimizing its displays for a certain period of time, or to the number of attempted communications in a smart radio device. Hence in such cases, it is crucial to devise anytime algorithms, that is, algorithms that do not rely on the knowledge of this horizon to sequentially select arms. A general way to turn any base algorithm into an anytime algorithm is the use of the so-called Doubling Trick, first proposed by [4], which consists in repeatedly running the base algorithm with increasing horizons. Motivated by the frequent use of this technique and the absence of a generic study of its effect on the algorithm's efficiency, this paper investigates in detail two families of doubling sequences (geometric and exponential), and shows that the former should be avoided for stochastic problems.

More formally, a MAB model is a set of $K$ arms, each arm $k$ being associated to an (unknown) reward stream $(Y_{k,t})_{t \ge 1}$. Fix a finite (possibly unknown) horizon $T$. At each time step $t \in \{1,\dots,T\}$, an agent selects an arm $A_t$ and receives as a reward the current value of the associated reward stream, $Y_{A_t,t}$. The agent's decision strategy (or bandit algorithm) $\mathcal{A}$ is such that each choice can only rely on the past observations, on external randomness and (possibly) on the knowledge of the horizon $T$. The objective of the agent is to find an algorithm that maximizes the expected cumulated rewards, where the expectation is taken over the possible randomness used by the algorithm and the possible randomness in the generation of the reward streams. In the oblivious case, in which the reward streams are independent of the algorithm's choices, this is equivalent to minimizing the regret, defined as

$$ R_T(\mathcal{A}) \;=\; \max_{1 \le k \le K} \mathbb{E}\Big[\sum_{t=1}^{T} Y_{k,t}\Big] \;-\; \mathbb{E}\Big[\sum_{t=1}^{T} Y_{A_t,t}\Big]. $$
This quantity, referred to as the pseudo-regret in [7], quantifies the difference between the expected cumulated reward of the best fixed action and that of the strategy $\mathcal{A}$. For the general adversarial bandit problem [6], in which the reward streams are arbitrary (picked by an adversary), a worst-case lower bound has been given. It says that for every algorithm, there exist (stochastic) reward streams such that the regret is larger than a quantity of order $\sqrt{KT}$ [6]. Besides, the $\mathrm{EXP3}$ algorithm has been shown to have a regret of order $\sqrt{KT \log K}$.

Much smaller regret may be obtained in stochastic MAB models, in which the reward stream from each arm $k$ is assumed to be i.i.d., drawn from some (unknown) distribution $\nu_k$ with mean $\mu_k$. In that case, various algorithms have been proposed with problem-dependent regret upper bounds of the form $C(\boldsymbol{\nu}) \log T$, where $C(\boldsymbol{\nu})$ is a constant that only depends on the arm distributions. Different assumptions on the arm distributions lead to different problem-dependent constants. In particular, under some parametric assumptions (e.g., Gaussian distributions, exponential families), asymptotically optimal algorithms have been proposed and analyzed (e.g., $\mathrm{kl}\text{-}\mathrm{UCB}$ [8] or Thompson sampling [1]), for which the constant obtained in the regret upper bound matches exactly that of the lower bound given by [17]. Under the non-parametric assumption that the rewards are bounded in $[0,1]$, the regret of the $\mathrm{UCB}_1$ algorithm [5] is of the above form with $C(\boldsymbol{\nu}) = \sum_{k : \mu_k < \mu^*} 8/(\mu^* - \mu_k)$, where $\mu^*$ is the mean of the best arm. Like in this last example, all the available constants become very large on "hard" instances, in which some arms are very close to the best arm. On such instances, $C(\boldsymbol{\nu}) \log T$ may be much larger than the worst-case $\sqrt{KT}$, and distribution-independent guarantees may actually be preferred.
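For reference, the classical finite-time guarantee of [5] behind this constant can be restated as follows (a well-known result, reproduced here only for convenience, for $K$ arms with rewards in $[0,1]$):

$$ R_T(\mathrm{UCB}_1) \;\le\; \sum_{k\,:\,\mu_k < \mu^*} \frac{8\,\log T}{\mu^* - \mu_k} \;+\; \Big(1 + \frac{\pi^2}{3}\Big)\sum_{k=1}^{K} (\mu^* - \mu_k). $$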

The $\mathrm{MOSS}$ algorithm, proposed by [2], is the first stochastic bandit algorithm to enjoy a problem-dependent logarithmic regret and to be optimal in a minimax sense, as its regret is proved to be upper bounded by a quantity of order $\sqrt{KT}$, for bandit models with rewards in $[0,1]$. However, the corresponding problem-dependent constant is proportional to $K/\Delta$, where $\Delta$ is the minimal gap, which worsens the constant of $\mathrm{UCB}_1$. Another drawback of $\mathrm{MOSS}$ is that it is not anytime. These two shortcomings have been overcome recently in two different works. On the one hand, the $\mathrm{MOSS}$-anytime algorithm [11] is minimax optimal and anytime, but its problem-dependent regret does not improve over that of $\mathrm{MOSS}$. On the other hand, the $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ algorithm [21] is simultaneously minimax optimal and asymptotically optimal (i.e., it has the best problem-dependent constant $C(\boldsymbol{\nu})$), but it is not anytime. A natural question is thus to know whether a Doubling Trick could overcome this limitation.

This question is the starting point of our comprehensive study of the Doubling Trick: can a single Doubling Trick be used to preserve both problem-dependent (logarithmic) regret and minimax (square-root) regret? We answer this question in the negative, by showing that two different types of Doubling Trick may actually be needed. In this paper, we investigate how algorithms enjoying regret guarantees of the generic form

$$ R_T(\mathcal{A}) \;\le\; c\, T^{\gamma} (\log T)^{\delta} + f(T), \qquad \text{for constants } c > 0,\ \gamma \ge 0,\ \delta \ge 0 \text{ and a second-order term } f, $$
may be turned into an anytime algorithm enjoying similar regret guarantees with an appropriate Doubling Trick. This does not come for free, and we exhibit a "price of the Doubling Trick", that is, a constant factor larger than 1, referred to as a constant multiplicative loss.

Outline The rest of the paper is organized as follows. The Doubling Trick is formally defined in Section 2, along with a generic tool for its analysis. In Section 3, we present upper and lower bounds on the regret of algorithms to which a geometric Doubling Trick is applied. Section 4 investigates regret guarantees that can be obtained for a “faster” exponential Doubling Trick. Experimental results are then reported in Section 5. Complementary elements of proofs are deferred to the appendix.

2 Doubling Tricks

The Doubling Trick, denoted by $\mathcal{DT}(\mathcal{A}, (T_i)_{i\in\mathbb{N}})$, is a general procedure to convert a (possibly non-anytime) algorithm $\mathcal{A}$ into an anytime algorithm $\mathcal{A}'$. It is formally stated in Figure 1 below and depends on a non-decreasing diverging doubling sequence $(T_i)_{i\in\mathbb{N}}$ (i.e., $T_i \le T_{i+1}$ for all $i$, and $T_i \to +\infty$). $\mathcal{DT}(\mathcal{A}, (T_i)_{i\in\mathbb{N}})$ fully restarts the underlying algorithm $\mathcal{A}$ at the beginning of each new sequence (at $t = T_i$), and runs this algorithm for the length of that sequence.

Figure 1: The Generic Doubling Trick Algorithm, $\mathcal{A}' = \mathcal{DT}(\mathcal{A}, (T_i)_{i\in\mathbb{N}})$.
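For concreteness, here is a minimal Python sketch of such a wrapper. The interface (a factory `make_policy(horizon)` returning an object with `choose()` and `update(arm, reward)` methods, and an environment exposing `draw(arm, t)`) and the convention that the base algorithm is fully restarted at each time $T_i$ are illustrative assumptions, not the exact pseudocode of Figure 1.

```python
def doubling_trick(make_policy, Ts, env, T):
    """Anytime wrapper around a horizon-dependent policy (sketch).

    Assumed convention: the base policy is fully restarted at each time T_i,
    tuned with the length of the new sequence as its horizon.
    """
    nxt = 0                                    # index of the next restart time in Ts
    policy = make_policy(horizon=Ts[0])        # first run covers times 0, ..., T_0 - 1
    total_reward = 0.0
    for t in range(T):
        if nxt < len(Ts) and t == Ts[nxt]:     # a new sequence begins: full restart
            horizon = Ts[nxt + 1] - Ts[nxt] if nxt + 1 < len(Ts) else T - Ts[nxt]
            policy = make_policy(horizon=horizon)
            nxt += 1
        arm = policy.choose()
        reward = env.draw(arm, t)              # observe the reward of the chosen arm
        policy.update(arm, reward)
        total_reward += reward
    return total_reward
```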

Related work. The Doubling Trick is a well-known idea in online learning, which can be traced back to [4]. In the literature, the term Doubling Trick usually refers to the geometric sequence $T_i = T_0 \cdot 2^i$, in which the horizon is actually doubled at each restart, and which was popularized by [9] in the context of adversarial bandits. Specific doubling tricks have also been used for stochastic bandits, for example in the work of [3], which uses a (faster-growing) doubling sequence to turn the improved $\mathrm{UCB}$ algorithm studied there into an anytime algorithm.

Elements of regret analysis. For a sequence $(T_i)_{i\in\mathbb{N}}$, with $T_i \in \mathbb{N}$ for all $i$, we denote by $T_0$ its first term, which is always taken non-zero (i.e., $T_0 \ge 1$). We only consider non-decreasing and diverging sequences (that is, $T_i \le T_{i+1}$ for all $i$, and $T_i \to +\infty$).

Definition 1: Last Term For a non-decreasing diverging sequence $(T_i)_{i\in\mathbb{N}}$ and a horizon $T$, we can define the last-term index $L_T$ by

It is simply denoted $L_T$ when there is no ambiguity (e.g., once the doubling sequence is chosen).
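As a small illustration, and under one natural reading of Definition 1 (namely that $L_T$ is the index of the last term of the doubling sequence not exceeding the horizon $T$; the precise convention is the one of the original definition), it can be computed as follows.

```python
def last_term_index(Ts, T):
    """Index L_T of the last term of the doubling sequence with Ts[i] <= T
    (one natural reading of Definition 1; assumed convention)."""
    L = 0
    while L + 1 < len(Ts) and Ts[L + 1] <= T:
        L += 1
    return L
```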

$\mathcal{DT}(\mathcal{A}, (T_i)_{i\in\mathbb{N}})$ reinitializes its underlying algorithm $\mathcal{A}$ at each time $T_i$, and in full generality the total regret is upper bounded by the sum of the regrets on each sequence. By considering the last, possibly partial, sequence, this splitting can be used to get a generic upper bound by taking into account a (complete) larger last sequence. And for stochastic bandit models, the i.i.d. hypothesis on the reward streams makes the splitting on each sequence an equality, so we can also get a lower bound by excluding the last partial sequence. Lemma 1 is proved in Appendix A.1.

Lemma 1: Regret Lower and Upper Bounds for $\mathcal{DT}$ For any bandit model, algorithm $\mathcal{A}$, and horizon $T$, one has the generic upper bound

Under a stochastic bandit model, one has furthermore the lower bound

As one can expect, the key to obtaining regret guarantees for a Doubling Trick algorithm is to choose the doubling sequence $(T_i)_{i\in\mathbb{N}}$ correctly. Empirically, one can verify that sequences with slow growth give terrible results; for example, using an arithmetic progression typically gives linear regret. Building on this observation, we will prove that if $\mathcal{A}$ satisfies a certain regret bound (in $\log T$, $\sqrt{T}$, or more generally $T^{\gamma}(\log T)^{\delta}$), then an appropriate anytime version of $\mathcal{A}$ obtained with a certain doubling trick can conserve the regret bound, with an explicit constant multiplicative loss. In this paper, we study in depth two families of sequences: first geometric and then exponential growths.

3 What the Geometric Doubling Trick Can and Can't Do

We define geometric doubling sequences, and prove that they can be used to conserve bounds in $T^{\gamma}(\log T)^{\delta}$ with $\gamma > 0$, but cannot be used to conserve bounds in $\log T$.

Definition 2: Geometric Growth For $T_0 \ge 1$ and $b > 1$, the sequence defined by $T_i := \lfloor T_0\, b^i \rfloor$ is non-decreasing and diverging, and satisfies

Asymptotically, $T_i$ grows like $T_0\, b^i$.
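As a quick sketch (assuming the parametrization $T_i = \lfloor T_0\, b^i \rfloor$, which matches the geometric growth described here), such a sequence can be generated as:

```python
import math

def geometric_sequence(T0, b, n):
    """First n terms of a geometric doubling sequence T_i = floor(T0 * b**i) (assumed form)."""
    return [math.floor(T0 * b ** i) for i in range(n)]

# For instance, geometric_sequence(200, 2, 6) gives [200, 400, 800, 1600, 3200, 6400];
# the values T0 = 200 and b = 2 are only an example here.
```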

3.1 Conserving a Regret Upper Bound with Geometric Horizons

A geometric doubling sequence allows one to conserve a minimax bound (i.e., a bound in $\sqrt{T}$). It was suggested, for instance, in [9]. We generalize this result in the following theorem, proved in Appendix A.2, by extending it to bounds of the form $T^{\gamma}(\log T)^{\delta}$ instead of just $\sqrt{T}$, for any $\gamma > 0$ and $\delta \ge 0$. Note that no distinction is made between the cases $\delta = 0$ and $\delta > 0$, neither in the expression of the constant loss nor in the proof.

Theorem 1 If an algorithm satisfies , for , and for , and an increasing function (at ), then the anytime version with the geometric sequence of parameters (i.e., ) with the condition if , satisfies,

with an increasing function , and a constant loss ,

For a fixed and , minimizing does not always give a unique solution. On the one hand, if , there is a unique solution minimizing the term, solution of , but without a closed form if is unknown. On the other hand, for any , the term depending on tends quickly to when increases.

Practical considerations. Empirically, when and are fixed and known, there is no need to minimize jointly. It can be minimized separately by first minimizing , that is by solving numerically (e.g., with Newton’s method), and then by taking large enough so that the other term is close enough to .

For the usual case of $\gamma = 1/2$ and $\delta = 0$ (i.e., bounds in $\sqrt{T}$), the optimal choice of $b$ leads to the smallest loss, and the usual choice of $b = 2$ gives a loss close to this optimal value (see Corollary 1 in appendix). Any large enough $b$ gives similar performance, and empirically a first term $T_0 \ge K$ is preferred, as most algorithms explore each arm once in their first $K$ steps (e.g., $T_0 = 200$ for $K = 9$ in our experiments).
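When the exponents of the bound are known, the parameter $b$ can also be tuned purely numerically, as suggested above. The sketch below is illustrative: `loss` is a hypothetical placeholder standing for the constant multiplicative loss of Theorem 1, and only SciPy's bounded scalar minimizer is assumed.

```python
from scipy.optimize import minimize_scalar

def best_geometric_base(loss, b_min=1.05, b_max=16.0):
    """Numerically minimize a constant multiplicative loss b -> loss(b) over the
    base b of the geometric doubling sequence (illustrative helper)."""
    res = minimize_scalar(loss, bounds=(b_min, b_max), method="bounded")
    return res.x, res.fun

# Usage sketch: plug in the expression of the loss from Theorem 1 for the chosen
# gamma and delta, e.g. best_geometric_base(lambda b: my_loss_from_theorem_1(b)),
# where my_loss_from_theorem_1 is a hypothetical implementation of that expression.
```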

3.2 Problem-Dependent Regret Lower Bound with Geometric Horizons

We observe that the constant loss exhibited in the previous Theorem 1 blows up when $\gamma$ goes to zero, giving the intuition that no geometric doubling trick could be used to preserve a logarithmic bound (i.e., a bound with $\gamma = 0$ and $\delta = 1$). This is confirmed by the lower bound given below.

Theorem 2 For stochastic models, if satisfies , for and , then the anytime version with the geometric sequence of parameters (i.e., ) satisfies this lower bound for a certain constant ,

Moreover, this implies that , which proves that a geometric sequence cannot be used to conserve a logarithmic regret bound.

If the regret is lower bounded at finite horizon by , unsurprisingly we can prove that a geometric doubling sequence gives a comparable lower bound for the Doubling Trick algorithm (see Th. 5 in Appendix B). But more importantly, we show with Theorem 2 that a geometric sequence cannot be used to conserve a finite-horizon lower bound like .

This special case ($\gamma = 0$, $\delta = 1$) is the most interesting, as efficient algorithms for stationary bandits must match the lower bound from [17]: if $\mathcal{A}$ satisfies both (finite-time) logarithmic lower and upper bounds (i.e., if $R_T(\mathcal{A})/\log(T)$ is bounded for $T$ large enough), then using a geometric sequence for the doubling trick algorithm is a bad idea, as it guarantees a blow-up in the regret lower bound, by implying . This result is the reason why we need to study successive horizons growing faster than a geometric sequence (i.e., such that $T_{i+1}/T_i \to +\infty$), like the exponential sequence studied in Section 4.

3.3 Proof of Theorem 2

Let and consider a fixed stochastic bandit problem. The lower bound from Lemma 1 gives

We bound for any , thanks to Definition 2, and we can use the hypothesis on for each regret term.

Let . If we have (see below () for a discussion on the other case), then Lemma 5 (Eq. ) gives as . For lower bounds, there is no need to handle the constants tightly, and we have by hypothesis, so let us call this constant , and thus it simplifies to

A sum-integral comparison for the increasing function (as ) gives (if ), and so

For the geometric sequence, we know that , and at so there exists a constant such that for large enough (), and such that . And thus we just proved that there is a constant such that

So this proves that for large enough, with , and so , which also implies that cannot be a .

() If we do not have the hypothesis , the same proof could be done, by observing that from large enough, we have (as soon as , i.e., ), and so the same arguments can be used, to obtain a sum from instead of from . For a fixed , we also have for a (small enough) constant , and thus we obtain the same result.

4 What Can the Exponential Doubling Trick Do?

We define exponential doubling sequences, and prove that they can be used to conserve bounds in $\log T$, unlike the previously studied geometric sequences. Furthermore, we provide elements showing that they may also conserve bounds in $\sqrt{T}$, or more generally in $T^{\gamma}(\log T)^{\delta}$.

Definition 3: Exponential Growth For , and , if , then the sequence defined by is non-decreasing and diverging, and satisfies

Asymptotically, such a sequence grows doubly exponentially in $i$.
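A sketch of such a sequence, assuming the doubly-exponential form $T_i = \lfloor T_0\, b^{c^i} \rfloor$ (chosen here only to match the growth discussed in this section; the exact parametrization is the one of Definition 3):

```python
import math

def exponential_sequence(T0, b, c, n):
    """First n terms of a doubly-exponential doubling sequence
    T_i = floor(T0 * b**(c**i)) (assumed form)."""
    return [math.floor(T0 * b ** (c ** i)) for i in range(n)]

# With b = c = 2 the successive horizons grow like T0 * 2**(2**i): much faster
# than any geometric sequence, as required by the discussion of Section 3.2.
```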

4.1 Conserving Regret Upper Bound with Exponential Horizons

An exponential doubling sequence allows one to conserve a problem-dependent (logarithmic) bound on the regret (i.e., a bound in $\log T$). This was already used by [3] in a particular case, and more recently, for example, by [20]. We generalize this result in the following theorem.

Theorem 3 If an algorithm satisfies , for , , and for , and an increasing function (at ), then the anytime version with the exponential sequence of parameters (i.e., ), satisfies the following inequality,

with an increasing function , and a constant loss ,

This result is as rich as Theorem 1 but for logarithmic bounds (i.e., bounds in $\log T$), with no loss but a constant . It can also be applied to bounds of the generic form (i.e., with ), but with a significant loss from to , and a constant loss . It is important to notice that for , the loss can be made arbitrarily small (by taking a large enough first step). This observation is encouraging, and leads the authors to think that a tight upper bound could be proved.

Corollary 2 In particular, order-optimal and optimal algorithms for problem-dependent bound have and , as well as , in which case Theorem 3 gives a simpler bound, as for any , , and so

Remark: the optimal choice of gives a constant loss of , which is twice as good as the loss of obtained1 in [3].

We observe that the constant loss from Theorem 3 blows up for (as ), and we give below in Theorem 4 another result toward a more precise lower bound.

4.2 Minimax Regret Lower Bound with Exponential Horizons

The lower bound from Theorem 4 below is a second step towards proving or disproving the converse of the lower bound of Theorem 2, for bounds of the form $T^{\gamma}(\log T)^{\delta}$ (with or without the logarithmic term) and an exponential sequence. This result, along with the previous Theorem 3, starts to answer the question of whether an exponential sequence is sufficient to conserve both minimax and problem-dependent guarantees with Doubling Tricks. Theorem 4 is proved in Appendix A.3.

Theorem 4 For stochastic models, if satisfies , for and , then the anytime version with the exponential sequence of parameters , , (i.e., ), satisfies this lower bound for a certain constant ,

If we could just take in the two previous Theorems 3 and 4, both results would match and prove that the exponential doubling trick can indeed be used to conserve minimax regret bounds. But obviously cannot depend on , and even if it can be taken as small as we want, it cannot be taken to (it has to be constant when ).

4.3 Proof of Theorem 3

Let , and consider a fixed bandit problem. We first consider the harder case of ; see below for the other case. The upper bound from Lemma 1 gives

We bound naively , and we can use the hypothesis on for each regret term, as and are non-decreasing for (by hypothesis for and by Lemma 6).

The first part is denoted and is dealt with by Lemma 7: the sum of is a , as by hypothesis, and this sum of is proved below to be bounded by . So . The second part is . Define , so whether or , we always have . Then we can use Lemma 4 (Eq. ) to distribute the power on (as it is ). So with the convention that (even if ), and so this gives

If then the first sum is just , which can be included in (still increasing), and so only the second sum has to be bounded, and a geometric sum gives . But if , we can naively bound the first sum by . Observe that . So and , thus the first sum is a (as ). In both cases, the first sum can be included in , which is still a . Another geometric sum bounds the second sum by .

We identify a constant multiplicative loss . The only term left which depends on is , and it can be bounded by using (as ), and again with . The constant part of also gives a term, that can be included in which is still a , and is still increasing as sum of increasing functions. So we can focus on the last term, and we obtain

So the constant multiplicative loss depends on and as well as on , and and is

If , the loss is minimal at and for a minimal value of (for any and ). Finally, the part tends to if so the loss can be made as small as we want, simply by choosing a large enough (but constant w.r.t. ).

Now for the other case of , we can start similarly, but instead of bounding naively by we use Lemma 3 to get a more subtle bound: . The constant term gets included in , and the non-constant part is handled similarly. Finally we obtain the loss

5 Numerical Experiments

We illustrate here the practical cost of using a Doubling Trick, for two interesting non-anytime algorithms that have recently been proposed in the literature: Approximated Finite-Horizon Gittins indexes, that we refer to as $\mathrm{AFHG}$, by [18] (for Gaussian bandits with known variance), and $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ by [21] (for Bernoulli bandits).

We first provide some details on these two algorithms, and then illustrate the behavior of Doubling Tricks applied to these algorithms with different doubling sequences.

5.1 Two Index-Based Algorithms

We denote by the accumulated rewards from arm , and by the number of times arm was sampled. Both algorithms assume the horizon $T$ to be known. They compute an index for each arm at each time step , and use the indexes to choose the arm with the highest index (ties are broken uniformly at random).
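A generic index-policy step as described here, with uniform tie-breaking, could look like the following sketch (the index formulas themselves are algorithm-specific and omitted):

```python
import numpy as np

def choose_arm(indexes, rng=np.random):
    """Pick an arm with maximal index, breaking ties uniformly at random."""
    indexes = np.asarray(indexes, dtype=float)
    best = np.flatnonzero(indexes == indexes.max())
    return int(rng.choice(best))
```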

5.2 Experiments

We present some results from numerical experiments on Bernoulli and Gaussian bandits. More results are presented in Appendix E. We present results for $K = 9$ arms and horizon $T = 45678$ (chosen to ensure that no doubling sequence was lucky enough to have a term equal to $T$, or too close to it). We ran $n = 1000$ repetitions of the random experiment, either on the same "easy" bandit problem (with evenly spaced means), or on different random instances with means sampled uniformly at random, and we plot the average regret over the simulations. The black line without markers is the (asymptotic) lower bound in $\log T$ from [17]. We consider $\mathrm{kl}\text{-}\mathrm{UCB}$ and $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ for Bernoulli bandits (Figures 2 and 3), and $\mathrm{UCB}$ and $\mathrm{AFHG}$ for Gaussian bandits (Figures 4 and 6).

Each doubling trick algorithm uses the same first term $T_0$ as a first guess for the horizon. We include both the non-anytime version, which knows the horizon $T$, and different anytime versions, to compare the choices of doubling trick. To compare against an algorithm that does not need the horizon, we also include $\mathrm{kl}\text{-}\mathrm{UCB}$ by [8] as a baseline for Bernoulli problems, and $\mathrm{UCB}$ by [5] (knowing the variance) for Gaussian problems. The doubling sequences we consider are a geometric one with $b = 2$, and different exponential sequences: the "classical" one with $b = 2$, the "slow" one with $b = 1.1$, and a last one using different values of the parameters. Despite what was proved theoretically in Theorem 3, using a large enough first term empirically improves over relying on a large leading factor.

Another version of the Doubling Trick with "no restart", denoted $\mathcal{DT}_{\text{no-restart}}$, is presented in Appendix C, but it is only a heuristic and cannot be applied to every algorithm $\mathcal{A}$. The algorithm of Figure 5 can be applied to $\mathrm{AFHG}$ or $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ for instance, as they use the horizon just as a numerical parameter (see Equations 13 and 14), but a first limitation is that it cannot be applied to [13] or [24], or to any algorithm based on arm eliminations, for example. A second limitation is the difficulty of analyzing this "no restart" variant, due to the unpredictable effect on the regret of giving non-uniform prior information to the underlying algorithm on each successive sequence. An interesting direction for future work would be to analyze it, either in general or for a specific algorithm. Despite its limitations, this heuristic exhibits, as expected, better empirical performance than $\mathcal{DT}$, as can be observed in Appendix E.

6 Conclusion

We formalized and studied the well-known "Doubling Trick" for generic multi-armed bandit problems, which is used to automatically obtain an anytime algorithm from any non-anytime algorithm. Our results are summarized in the table below. We showed that a geometric doubling can be used to conserve minimax regret bounds (in $\sqrt{T}$), with a constant multiplicative loss, but cannot be used to conserve problem-dependent bounds (in $\log T$), for which a faster doubling sequence is needed. An exponential doubling sequence can conserve logarithmic regret bounds, also with a constant loss, but it is still an open question whether minimax bounds can be conserved with this faster-growing sequence. Partial results, of both a lower and an upper bound, for bounds of the generic form $T^{\gamma}(\log T)^{\delta}$, lead us to believe in a positive answer.

It is still an open problem to know whether an anytime algorithm can be both asymptotically optimal for the problem-dependent regret (i.e., with the exact constant) and minimax optimal (i.e., have a regret in $\sqrt{T}$), but we believe that using a doubling trick on non-anytime algorithms like $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ cannot be the solution. We showed that it cannot work with a geometric doubling sequence, and we conjecture that an exponential doubling trick would never bring the right constant either.

Bound in $\log T$: geometric doubling is known to fail (Theorem 2); exponential doubling is known to work, with a constant loss (Theorem 3).
Bound in $\sqrt{T}$: geometric doubling is known to work, with a constant loss (Theorem 1); for exponential doubling, only a partial result is known, with an extra loss in the best known bound (Theorems 3 and 4).
Bound in $T^{\gamma}(\log T)^{\delta}$ (for both $\gamma$ and $\delta$): geometric doubling is known to work, with a constant loss (Theorem 1); for exponential doubling, only a partial result is known (Theorem 3).

Summary of known positive, negative, and partial results.

Figure 2: Regret for $\mathcal{DT}$, for $K=9$ Bernoulli arms, horizon $T=45678$, $n=1000$ repetitions and $\boldsymbol{\mu}$ taken uniformly in $[0,1]^K$. Geometric doubling ($b=2$) and slow exponential doubling ($b=1.1$) are too slow, and short first sequences make the regret blow up in the beginning of the experiment. At $t=40000$ we see clearly the effect of a new sequence for the best doubling trick ($T_i = 200 \times 2^i$). As expected, $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ outperforms $\mathrm{kl}\text{-}\mathrm{UCB}$, and if the doubling sequence is growing fast enough then $\mathcal{DT}(\mathrm{kl}\text{-}\mathrm{UCB}^{++})$ can perform as well as $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ (see for $t < 40000$).
Figure 3: Similarly but for $\boldsymbol{\mu}$ evenly spaced in $[0,1]^K$ ($\{0.1,\dots,0.9\}$). Both $\mathrm{kl}\text{-}\mathrm{UCB}$ and $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ are very efficient on "easy" problems like this one, and we can check visually that they match the lower bound from [17]. As before, we check that slow doubling sequences are too slow to give reasonable performance.
Figure 4: Regret for $K=9$ Gaussian arms $\mathcal{N}(\mu, 1)$, horizon $T=45678$, $n=1000$ repetitions and $\boldsymbol{\mu}$ taken uniformly in $[-5,5]^K$ and variance $V=1$. On "hard" problems like this one, both $\mathrm{UCB}$ and $\mathrm{AFHG}$ perform similarly and poorly w.r.t. the lower bound from [17]. As before, we check that geometric doubling ($b=2$) and slow exponential doubling ($b=1.1$) are too slow, but a fast enough doubling sequence does give reasonable performance for the anytime $\mathrm{AFHG}$ obtained by Doubling Trick.

Note The (Python 3) simulation code used for the experiments is open-sourced at https://GitHub.com/SMPyBandits/SMPyBandits, and documented at https://smpybandits.github.io/.


A Omitted Proofs

We include here the proofs omitted in the main document.

A.1 Proof of Lemma 1, "Regret Lower and Upper Bounds for $\mathcal{DT}$"

Let denote . For every ,

Thus, by definition of the regret

In the stochastic case, it is well known that the regret can be rewritten in the following way, introducing the mean $\mu_k$ of arm $k$, the mean $\mu^*$ of the best arm, and the number of selections $N_k(T)$ of arm $k$ up to time $T$: $R_T(\mathcal{A}) = \sum_{k=1}^{K} (\mu^* - \mu_k)\,\mathbb{E}[N_k(T)]$,

and the lower bound follows.

A.2 Proof of Theorem 1, "Conserving Regret Upper Bound with Geometric Horizons"

It is interesting to note that the proof is valid for both the easiest case when (as it was known in [9] for ) and the generic case when , with no distinction.

As far as the authors know, this result in its generality with is new.

Proof Let , and consider a fixed bandit problem. The upper bound from Lemma 1 gives

We can use the hypothesis on for each regret term, as and are non-decreasing for (by hypothesis for and by Lemma 6).

The first part is denoted , it is an increasing function as a sum of increasing functions, and it is dealt with by using Lemma 7: the sum of is a , as by hypothesis, and this sum of is proved below to be bounded by for a certain constant , which gives . For the second part, we bound thanks to Definition 2. Moreover, as we can use Lemma 4 (Eq. ) to distribute the power on , so (indeed if both sides are equal to ). This gives

If , the last sum is bounded by which is a (as , thanks to a geometric sum), and so it can be included in . If , there is only the first sum. We bound again and use Lemma 5 to bound by term (as by hypothesis).

We split the term in two, and once again, the term with gives a (by a geometric sum), which gets included in . We focus on the fastest term, and we can now rewrite the sum from to ,

We naively bound by , and use a geometric sum to have

Finally, observe that , so the term simplifies, and observe that so the term also simplifies. Thus we get

The constant multiplicative loss depends on and as well as on and , and is .

Minimizing the constant loss? This constant loss has two distinct parts, , with depending on , and (equal to if ), and depending on and .

The result from Theorem 1 of course implies the result from [9], in the special case of $\gamma = 1/2$ and $\delta = 0$ (for minimax bounds), as stated numerically in the following Corollary 1.

Corollary 1 If $\gamma = 1/2$ and $\delta = 0$, the multiplicative loss does not depend on . It is then minimal for and its minimum is . Usually $b = 2$ is used, which gives a loss of , close to the optimal value.

In particular, order-optimal and optimal algorithms for the minimax bound have and , for which Theorem 3 gives a simpler bound

A.3 Proof of Theorem 4, "Minimax Regret Lower Bound with Exponential Horizons"

Proof Let , and consider a fixed stochastic bandit problem. The lower bound from Lemma 1 gives

We can use the hypothesis on for each regret term, and as , we can use Lemma 4 (Eq. ) to distribute the power on to ease the proof and obtain

Observe that by definition of the exponential sequence (Definition 3), and (Definition 1). For the term, we simply have so if , then we obtain what we want

This lower bound goes from to , and it looks very similar to the upper bound from Theorem 3 where was obtained from .

Remark It does seem sub-optimal to lower bound like this (), but we recall that can be located anywhere in the discrete interval , so in the worst case when is very close to (and for large enough ), we indeed have and , so with this approach, the lower bound cannot be improved.


B Minimax Regret Lower Bound with Geometric Horizons

We include here a last result that partly complements Theorem 1. It is more subtle than the lower bound in Theorem 2 but still provides an interesting insight: if the parameters are not chosen carefully (i.e., if ), then the anytime version of $\mathcal{A}$ using a geometric Doubling Trick suffers a non-improvable constant multiplicative loss compared to $\mathcal{A}$.

Theorem 5 For stochastic models, if satisfies , for and , then the anytime version with the geometric sequence of parameters (i.e., ) satisfies

with , and a constant loss depending only on and ,

is always and tends to for , and some choice of gives .

Proof Let , and consider a fixed stochastic bandit problem. Assume . The lower bound from Lemma 1 gives

We bound for , thanks to Definition 2, and we can use the hypothesis on for each regret term. Additionally, we have by Lemma 4 (Eq. , as and ), thus

We have thanks to a geometric sum (with ) and thus

Thanks to Definition 2, satisfies . Let and , then we have

We obtain as announced, with .

Maximizing the constant loss? To maximize2 , we fix and study the function . The function is of class on and for and for . Moreover has the same sign as . The function is of class , with and , and as , so for all . Thus is decreasing, and . So has a negative sign, and this allows to conclude that is decreasing, and so has no global maximum at fixed , and if .

Relationship with the upper bound. For any , we compare with and we see that, interestingly, , with from Theorem 1. For the particular case of , this lower bound also leads to another interesting remark: if is chosen to minimize the loss in the upper bound (Theorem 1, ), then this lower bound gives , which proves that this choice of geometric doubling trick cannot be used to conserve an optimal algorithm, i.e., the constant loss cannot be made as close to $1$ as we want.

C An Efficient Heuristic, the Doubling Trick with "No Restart"

Let $\mathcal{DT}_{\text{no-restart}}$ denote the algorithm stated in Figure 5. The only difference with $\mathcal{DT}$ (Figure 1) is that the history of all the steps played so far is used to reinitialize the new algorithm. To be more precise, this means that a fresh algorithm is created, and then fed with the successive past observations, as if it had been playing from the beginning. Note that this fresh algorithm could have chosen a different path of actions, but we give it the observations from the previous plays.

This obviously cannot be applied to every kind of algorithm $\mathcal{A}$: for instance, any algorithm based on arm eliminations (e.g., Explore-Then-Commit approaches as in [12]) will most surely fail with this approach. This second doubling trick algorithm can be applied in practice if $\mathcal{A}$ is index-based and uses the horizon as a simple numerical parameter in its indexes, as is the case for instance for $\mathrm{AFHG}$ and $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ (cf. Equations 13 and 14).

Figure 5: The Non-Restarting Doubling Trick Algorithm, $\mathcal{A}' = \mathcal{DT}_{\text{no-restart}}(\mathcal{A}, (T_i)_{i\in\mathbb{N}})$.
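A minimal Python sketch of this heuristic, with the same illustrative interface as the doubling-trick sketch given in Section 2 (the extra assumption being that the fresh instance can be warm-started simply by feeding it the stored `(arm, reward)` history through `update`):

```python
def doubling_trick_no_restart(make_policy, Ts, env, T):
    """Doubling trick where each new instance is warm-started by replaying the
    full history of past observations (heuristic sketch, assumed interface)."""
    nxt, history = 0, []
    policy = make_policy(horizon=Ts[0])
    for t in range(T):
        if nxt < len(Ts) and t == Ts[nxt]:     # new sequence: fresh instance...
            horizon = Ts[nxt + 1] - Ts[nxt] if nxt + 1 < len(Ts) else T - Ts[nxt]
            policy = make_policy(horizon=horizon)
            for arm, reward in history:        # ...fed with all past observations
                policy.update(arm, reward)
            nxt += 1
        arm = policy.choose()
        reward = env.draw(arm, t)
        policy.update(arm, reward)
        history.append((arm, reward))
    return history
```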

However, it is much harder to state any theoretical result on this heuristic . We conjecture that a regret upper bound similar to from Lemma 1 could still be obtained, but it is still an open problem that the authors do not know how to tackle for a generic algorithm. The intuition is that starting with some previous observations from the (same) i.i.d. process can only improve the performance of , and lead to a smaller regret on the interval .


D Basic but Useful Results

All the logarithms are taken in base $e$ (i.e., the natural logarithm $\ln$, but the notation $\log$ is preferred for readability). Logarithms in a basis $b$ are denoted $\log_b$, for any $b > 1$ (i.e., $\log_b(x) = \log(x)/\log(b)$).

We recall that $\lfloor x \rfloor$ denotes the integer part (floor) of $x$, for $x \in \mathbb{R}$, that is, the unique integer such that $\lfloor x \rfloor \le x < \lfloor x \rfloor + 1$. The only property we use is its definition and the fact that $x - 1 < \lfloor x \rfloor \le x$. We also define $\lceil x \rceil$ for $x \in \mathbb{R}$, which is the unique integer such that $\lceil x \rceil - 1 < x \le \lceil x \rceil$.

D.1 Weighted Geometric Inequality

Lemma 2: Weighted Geometric Inequality For any , and , and if is a function of class , non-decreasing and non-negative on , we have

Proof By hypothesis, is non-decreasing, so , and so by using the sum of a geometric sequence, we have

gives the geometric inequality. Note that if we make , the left sum converges to and the right term diverges to , making this inequality completely uninformative.

D.2 Another Sum Inequality

This second result is similar to the previous one but for a “doubly exponential” sequence, i.e., , as it also bounds a sum of increasing terms by a constant times its last term.

Lemma 3 For any , , and , we have

Proof We first isolate both the first and last terms in the sum and focus on the sum from up to . As the function is increasing for , we use a sum-integral inequality, and then a change of variable, of Jacobian , gives

Now for , observe that , and as , we have

Finally, we obtain as desired, .

D.3 Basic Functional Inequalities

These functional inequalities are used in the proof of the main theorems.

Lemma 4: Generalized Square Root Inequalities For any and ,

And conversely for any , and , if then

Proof Fix . Let for . First, , and as , is differentiable on , with , and as , , and , so for any . Therefore, is non-increasing, and so , so is non-positive, giving the desired inequality (for any and any ).

The second inequality is a direct application of the first one. Assume , and let , then . This gives .

Lemma 5: Bounding Let and (e.g., ), then

With , it implies that if , satisfy , then for any , we have

Proof Let , defined for . It is of class , and by differentiating, we have as is increasing. So is increasing, and its minimum is attained at , i.e., , which gives Eq. .

The corollary is immediate but stated explicitly for clarity when used in proofs.

Lemma 6 For any and , the function is increasing on .

Proof is of class on . First, if , we have . So if and only if , that is, . But and so is always positive, and thus is increasing. Then, if , we have as gives and so .

It is also true if , if not both are zero simultaneously.

D.4 Controlling an Unbounded Sum of Dominated Terms

This Lemma is used in the proofs of our upper bounds (Theorems 1 and 3), to handle the sum of terms. In particular, it can be applied to the geometric sequence and or (for Theorem 1) for and ; or the exponential sequence, and (for Theorem 3) for and . Note that it would be obvious if was bounded for , but a more careful analysis has to be given as .

Lemma 7 Let , and be three positive, diverging and non-decreasing functions on , such that for . Let a non-decreasing diverging sequence , and a diverging sequence (i.e., for and if , such that there exists a constant satisfying . Then the (unbounded) sum of dominated terms is still dominated by , i.e.,

Proof By hypothesis, if is dominated by , formally , then there exists a positive function such that and for . Fix , as small as we want, then there exists such that . Let such that . Now for any and large enough so that , we can start to split the sum

The first sum is naively bounded by as is increasing, and for the second sum, for any , and so , thus

The sum is smaller than a sum on a larger interval, as for any , and is increasing so

But now, by hypothesis, and this sum is smaller than also by hypothesis

Finally, we use the hypothesis that is diverging and, as and are both fixed, there exists a large enough so that for all . And so we have finally proved that

This concludes the proof and shows that as wanted.

E Additional Experiments

We present additional experiments, for Gaussian bandits and for the heuristic $\mathcal{DT}_{\text{no-restart}}$.

E.1 Experiments with Gaussian Bandits (with Known Variance)

We include here another figure for experiments on Gaussian bandits, see Figure 6.

Figure 6: Regret for $\mathcal{DT}$, for $K=9$ Gaussian arms $\mathcal{N}(\mu, 1)$, horizon $T=45678$, $n=1000$ repetitions and $\boldsymbol{\mu}$ uniformly spaced in $[-5,5]^K$. On "easy" problems like this one, both $\mathrm{UCB}$ and $\mathrm{AFHG}$ perform similarly and attain near-constant regret (identifying the best Gaussian arm is very easy here as they are sufficiently distinct). Each doubling trick also appears to attain near-constant regret, but geometric doubling ($b=2$) and slow exponential doubling ($b=1.1$) are slower to converge and thus less efficient.

E.2 Experiments with $\mathcal{DT}_{\text{no-restart}}$

As mentioned previously, the $\mathcal{DT}_{\text{no-restart}}$ algorithm (Figure 5) is only a heuristic so far, as no theoretical guarantee has been proved for it. For the sake of completeness, we also include results from numerical experiments with it, to compare its performance against the "with restart" version $\mathcal{DT}$.

As expected, $\mathcal{DT}_{\text{no-restart}}$ enjoys much better empirical performance, and in Figures 7 and 8 we see that a geometric or a slow exponential doubling trick with no restart can outperform $\mathrm{kl}\text{-}\mathrm{UCB}$ and perform similarly to the non-anytime $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$. But as observed before, the regret blows up after the beginning of each new sequence if the doubling sequence increases too fast (e.g., exponential doubling). Despite what is proved theoretically in Theorem 2, here we observe that the geometric doubling is the only one slow enough to be efficient.

Figure 7: Regret for $K=9$ Bernoulli arms, horizon $T=45678$, $n=1000$ repetitions and $\boldsymbol{\mu}$ taken uniformly in $[0,1]^K$, for $\mathcal{DT}_{\text{no-restart}}$. Geometric doubling (e.g., $b=2$) and slow exponential doubling (e.g., $b=1.1$) are too slow, and short first sequences make the regret blow up in the beginning of the experiment. At $t=40000$ we see clearly the effect of a new sequence for the best doubling trick ($T_i = 200 \times 2^i$). As expected, $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ outperforms $\mathrm{kl}\text{-}\mathrm{UCB}$, and if the doubling sequence is growing fast enough then $\mathcal{DT}_{\text{no-restart}}(\mathrm{kl}\text{-}\mathrm{UCB}^{++})$ can perform as well as $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$.
Figure 8: $K=9$ Bernoulli arms with $\boldsymbol{\mu}$ evenly spaced in $[0,1]^K$. On easy problems like this one, both $\mathrm{kl}\text{-}\mathrm{UCB}$ and $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ are very efficient, and here the geometric doubling allows the $\mathcal{DT}_{\text{no-restart}}$ anytime version of $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$ to outperform both $\mathrm{kl}\text{-}\mathrm{UCB}$ and $\mathrm{kl}\text{-}\mathrm{UCB}^{++}$.

Footnotes

  1. In [3], the authors obtained a loss of , as the ratio between the constants for the terms, respectively in Th.4.1 and in Th.3.1.
  2. For the largest possible lower bound, we try to maximize the constant loss in the lower bound.

References

  1. Analysis of Thompson sampling for the Multi-Armed Bandit problem.
    S. Agrawal and N. Goyal. In Conference On Learning Theory. PMLR, 2012.
  2. Minimax policies for adversarial and stochastic bandits.
    J-Y. Audibert and S. Bubeck. In Conference on Learning Theory, pages 217–226. PMLR, 2009.
  3. UCB Revisited: Improved Regret Bounds For The Stochastic Multi-Armed Bandit Problem.
    P. Auer and R. Ortner. Periodica Mathematica Hungarica
  4. Gambling in a Rigged Casino: The Adversarial Multi-Armed Bandit Problem.
    P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. In Annual Symposium on Foundations of Computer Science, pages 322–331. IEEE, 1995.
  5. Finite-time Analysis of the Multi-armed Bandit Problem.
    P. Auer, N. Cesa-Bianchi, and P. Fischer. Machine Learning
  6. The Nonstochastic Multiarmed Bandit Problem.
    P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. SIAM journal on computing
  7. Regret Analysis of Stochastic and Non-Stochastic Multi-Armed Bandit Problems.
    S. Bubeck, N. Cesa-Bianchi, et al. Foundations and Trends in Machine Learning
  8. Kullback-Leibler upper confidence bounds for optimal sequential allocation.
    O. Cappé, A. Garivier, O-A. Maillard, R. Munos, and G. Stoltz. Annals of Statistics
  9. Prediction, Learning, and Games.
    N. Cesa-Bianchi and G. Lugosi. 2006.
  10. An Empirical Evaluation of Thompson Sampling.
    O. Chapelle and L. Li. In Advances in Neural Information Processing Systems, pages 2249–2257. Curran Associates, Inc., 2011.
  11. Anytime Optimal Algorithms In Stochastic Multi Armed Bandits.
    R. Degenne and V. Perchet. In International Conference on Machine Learning, pages 1587–1595, 2016.
  12. On Explore-Then-Commit Strategies.
    A. Garivier, E. Kaufmann, and T. Lattimore. volume 29 of Advances in Neural Information Processing Systems (NIPS), Barcelona, Spain, 2016.
  13. An Asymptotically Optimal Bandit Algorithm for Bounded Support Models.
    J. Honda and A. Takemura. In Conference on Learning Theory, pages 67–79. PMLR, 2010.
  14. Multi-Armed Bandit Based Policies for Cognitive Radio’s Decision Making Issues.
    W. Jouini, D. Ernst, C. Moy, and J. Palicot. In International Conference Signals, Circuits and Systems. IEEE, 2009.
  15. Thompson Sampling: an Asymptotically Optimal Finite-Time Analysis.
    E. Kaufmann, N. Korda, and R. Munos. pages 199–213. PMLR, 2012.
  16. On the Complexity of A/B Testing.
    E. Kaufmann, O. Cappé, and A. Garivier. In Conference on Learning Theory, pages 461–481. PMLR, 2014.
  17. Asymptotically Efficient Adaptive Allocation Rules.
    T. L. Lai and H. Robbins. Advances in Applied Mathematics
  18. Regret Analysis Of The Finite Horizon Gittins Index Strategy For Multi Armed Bandits.
    T. Lattimore. In Conference on Learning Theory, pages 1214–1245. PMLR, 2016.
  19. A Contextual-Bandit Approach to Personalized News Article Recommendation.
    L. Li, W. Chu, J. Langford, and R. E. Schapire. In International Conference on World Wide Web, pages 661–670. ACM, 2010.
  20. Stochastic Multi-Armed Bandits in Constant Space.
    D. Liau, E. Price, Z. Song, and G. Yang. In International Conference on Artificial Intelligence and Statistics, 2018.
  21. A Minimax and Asymptotically Optimal Algorithm for Stochastic Bandits.
    P. Ménard and A. Garivier. In Algorithmic Learning Theory, volume 76, pages 223–237. PMLR, 2017.
  22. Some Aspects of the Sequential Design of Experiments.
    H. Robbins. Bulletin of the American Mathematical Society
  23. Risk-Aversion In Multi-Armed Bandits.
    A. Sani, A. Lazaric, and R. Munos. In Advances in Neural Information Processing Systems, pages 3275–3283, 2012.
  24. An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits.
    Y. Seldin and G. Lugosi. In Conference on Learning Theory, volume 65, pages 1–17. PMLR, 2017.
  25. On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples.
    W. R. Thompson. Biometrika
  26. A framework for Multi-A(rmed)/B(andit) Testing with Online FDR Control.
    F. Yang, A. Ramdas, K. Jamieson, and M. Wainwright. In Advances in Neural Information Processing Systems, pages 5957–5966. Curran Associates, Inc., 2017.