The Generalized Likelihood Ratio Test meets klUCB: an Improved Algorithm for Piece-Wise Non-Stationary Bandits

Lilian Besson1 and Emilie Kaufmann2
1 Lilian.Besson@CentraleSupelec.fr
CentraleSupélec (campus of Rennes), IETR, SCEE Team,
Avenue de la Boulaie – CS , F- Cesson-Sévigné, France
2 Emilie.Kaufmann@univ-lille.fr
CNRS & Université de Lille, Inria SequeL team
UMR 9189 – CRIStAL, F- Lille, France




Abstract

We propose a new algorithm for the piece-wise i.i.d. non-stationary bandit problem with bounded rewards. Our proposal, GLR-klUCB, combines an efficient bandit algorithm, klUCB, with an efficient, parameter-free, change-point detector, the Bernoulli Generalized Likelihood Ratio Test, for which we provide new theoretical guarantees of independent interest. We analyze two variants of our strategy, based on local restarts and global restarts, and show that their regret is upper-bounded by if the number of change-points is unknown, and by if is known. This improves the state-of-the-art bounds, as our algorithm needs no tuning based on knowledge of the problem complexity other than . We present numerical experiments showing that GLR-klUCB outperforms passively and actively adaptive algorithms from the literature, and highlight the benefit of using local restarts.

Keywords: Multi-Armed Bandits; Change Point Detection; Non-Stationary Bandits

1 Introduction

Multi-Armed Bandit (MAB) problems form a well-studied class of sequential decision making problems, in which an agent repeatedly chooses an action or “arm” (in reference to the arm of a one-armed bandit) among a set of arms. In the most standard version of the stochastic bandit model, each arm is associated with an i.i.d. sequence of rewards that follow some distribution of mean . Upon selecting arm , the agent receives the reward associated to the chosen arm, and her goal is to adopt a sequential sampling strategy that maximizes the expected sum of these rewards. This is equivalent to minimizing the regret, defined as the difference between the total reward of the oracle strategy always selecting the arm with largest mean, , and that of our strategy: .

Regret minimization in stochastic bandits has been extensively studied since the works of [H. Robbins (1952)] and [T. L. Lai and H. Robbins (1985)], and several algorithms with a problem-dependent regret upper bound have been proposed (see, e.g., [T. Lattimore and C. Szepesvári (2019)] for a survey). Among those, the klUCB algorithm (Cappé et al., 2013) has been shown to be asymptotically optimal for Bernoulli distributions (in that it exactly matches the lower bound given by Lai and Robbins (1985)) and can also be employed when the rewards are assumed to be bounded in . Problem-independent upper bounds of the form (with no hidden constant depending on the arms' distributions) have also been established for stochastic algorithms, like MOSS or klUCB-Switch by Garivier et al. (2018), while klUCB is known to enjoy a sub-optimal problem-independent regret.

Stochastic bandits were historically introduced as a simple model for clinical trials, where arms correspond to some treatments with unknown efficacy (Thompson, 1933). More recently, MAB models have been proved useful for different applications, like cognitive radio, where arms can model the vacancy of radio channels, or parameters of a dynamically configurable radio hardware (Maghsudi and Hossain, 2016; Bonnefoi et al., 2017; Kerkouche et al., 2018). Another application is the design of recommender systems, where arms model the popularity of different items (e.g., news recommendation, Li et al. (2010)).

For both cognitive radio and recommender systems, the assumption that the arm distributions do not evolve over time may be a strong limitation. Indeed, in cognitive radio new devices can enter or leave the network, which impacts the availability of the radio channel they use to communicate; whereas in online recommendation, the popularity of items is also subject to trends. Hence, there has been some interest in how to take these non-stationary aspects into account within a multi-armed bandit model.

A first possibility to cope with non-stationarity is to model the decision making problem as an adversarial bandit problem (Auer et al., 2002b). Under this model, rewards are completely arbitrary and are not assumed to follow any probability distribution. For adversarial environments, the pseudo-regret, which compares the accumulated reward of a given strategy with that of the best fixed-arm policy, is often studied. The pseudo-regret of the EXP3 algorithm has been shown to be , which matches the lower bound given by Auer et al. (2002b). However, this model is a bit too general for the considered applications, where reward distributions do not necessarily vary at every round. For these reasons, an intermediate model, called the piece-wise stationary MAB, has been introduced by Kocsis and Szepesvári (2006) and Yu and Mannor (2009). In this model, described in full detail in Section 2, the (random) reward of arm at round has some mean , that is constant on intervals between two breakpoints, and the regret is measured with respect to the current best arm .

In this paper, we propose a new algorithm for the piece-wise stationary bandit problem with bounded rewards, called GLR-klUCB. Like previous approaches – CUSUM (Liu et al., 2018) and M-UCB (Cao et al., 2019) – our algorithm relies on combining a standard multi-armed bandit algorithm with a change-point detector. For the bandit component, we propose the use of the klUCB algorithm, that is known to outperform UCB1 (Auer et al., 2002a) used in previous works. For the change-point detector, we propose the Bernoulli Generalized Likelihood Ratio Test (GLRT), for which we provide new non-asymptotic properties that are of independent interest. This choice is particularly appealing because unlike previous approaches, the Bernoulli GLRT is parameter-free: it does not need the tuning of a window size ( in M-UCB), or the knowledge of a lower bound on the magnitude of the smallest change ( in CUSUM).

In this work we jointly investigate, both in theory and in practice, two possible combinations of the bandit algorithm with a change-point detector, namely the use of local restarts (resetting the history of an arm each time a change-point is detected on that arm) and global restarts (resetting the history of all arms once a change-point is detected on one of them). We provide a regret upper bound scaling in for both versions of GLR-klUCB, matching existing results (when is known). Our numerical simulations reveal that using local restart leads to better empirical performance, and show that our approach often outperforms existing competitors.

The article is structured as follows. We introduce the model and review related works in Section 2. In Section 3, we study the Generalized Likelihood Ratio test (GLRT) as a Change-Point Detector (CPD) algorithm. We introduce the two variants of the GLR-klUCB algorithm in Section 4, where we also present upper bounds on the regret of each variant. The unified regret analysis for these two algorithms is sketched in Section 5. Numerical experiments are presented in Section 6, with more details in the Appendix.

2 The Piece-Wise Stationary Bandit Setup and Related Works

A piece-wise stationary bandit model is characterized by a set of arms. A (random) stream of rewards is associated to each arm . We assume that the rewards are bounded, and without loss of generality we assume that they lie in $[0,1]$. We denote by the mean reward of arm at round . At each round , a decision maker has to select an arm , based on past observations, and receives the corresponding reward . At time , we denote by an arm with maximal expected reward, i.e., , called an optimal arm (possibly not unique).

A policy chooses the next arm to play based on the sequence of past plays and obtained rewards. The performance of is measured by its (piece-wise stationary) regret, the difference between the expected reward obtained by an oracle policy playing an optimal arm at time , and that of the policy :

$\mathcal{R}_T = \mathbb{E}\left[ \sum_{t=1}^{T} \left( \max_{a} \mu_a(t) - \mu_{A_t}(t) \right) \right],$ where $A_t$ denotes the arm selected at round $t$ and $\mu_a(t)$ the mean of arm $a$ at round $t$. (1)

In the piece-wise i.i.d. model, we furthermore assume that there is a (relatively small) number of breakpoints, denoted by . We define the -th breakpoint by . Hence for , the rewards associated to each arm are i.i.d. Note that when a breakpoint occurs, we do not assume that all the arm means change, but that there exists an arm whose mean has changed. Depending on the application, many scenarios can be meaningful: changes occurring on all arms simultaneously (due to some exogenous event), or changes affecting only a few arms at each breakpoint. Introducing the number of change-points on arm , defined as , it clearly holds that , but there can be an arbitrary difference between these two quantities for some arms. Letting be the total number of change-points on the arms, one can have .

The piece-wise stationary bandit model can be viewed as an interpolation between stationary and adversarial models, as the stationary model corresponds to , while the adversarial model can be considered as a special (worst) case, with . However, analyzing an algorithm for the piece-wise stationary bandit model requires assuming a small number of changes, typically .

Related work.

The piece-wise stationary bandit model was first studied by Kocsis and Szepesvári (2006); Yu and Mannor (2009); Garivier and Moulines (2011). It is also known as switching (Mellor and Shapiro, 2013) or abruptly changing stationary (Wei and Srivastava, 2018) environment. To our knowledge, all the previous approaches combine a standard bandit algorithm, like UCB, Thompson Sampling or EXP3, with a strategy to account for changes in the arms distributions. This strategy often consists in forgetting old rewards, to efficiently focus on the most recent ones, more likely to be similar to future rewards. We make the distinction between passively and actively adaptive strategies.

The first proposed mechanisms to forget the past consist in either discounting rewards (each time a new reward is obtained on an arm, the past rewards of that arm are multiplied by a power of a discount factor that depends on how long the arm has not been seen), or using a sliding window (only the rewards gathered in the last observations of an arm are taken into account, for a window size ). Those strategies are passively adaptive as the discount factor or the window size are fixed, and can be tuned as a function of and to achieve a certain regret bound. Discounted UCB (D-UCB) was proposed by Kocsis and Szepesvári (2006) and analyzed by Garivier and Moulines (2011), who prove a regret bound, if . The same authors proposed the Sliding-Window UCB (SW-UCB) and prove a regret bound, if .
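As a rough illustration of these two forgetting mechanisms, here is a minimal Python sketch (variable names and structure are ours, and only meant to illustrate the updates, not to reproduce the exact D-UCB and SW-UCB algorithms):

from collections import deque

def discounted_update(disc_sum, disc_count, reward, gamma, gap):
    # D-UCB-style discounting: the past statistics of the arm are multiplied by
    # gamma**gap, where gap is the number of rounds since the arm was last pulled,
    # before the new reward is added.
    weight = gamma ** gap
    return weight * disc_sum + reward, weight * disc_count + 1.0

def sliding_window_mean(window, reward, tau):
    # SW-UCB-style statistics: only the last tau rewards of the arm are kept.
    window.append(reward)
    if len(window) > tau:
        window.popleft()  # forget the oldest reward
    return sum(window) / len(window)

# usage sketch: keep one deque() per arm and call sliding_window_mean(window, r, tau)
# after each play; the discounted mean is disc_sum / disc_count.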

More recently, Raj and Kalyani (2017) proposed the Discounted Thompson Sampling (DTS) algorithm, which performs well in practice with . However, no theoretical guarantees are given for this strategy, and our experiments did not really confirm the robustness to . The RExp3 algorithm (Besbes et al., 2014) can also be qualified as passively adaptive: it is based on (non-adaptive) restarts of the EXP3 algorithm. Note that this algorithm is introduced for a different setting, where the quantity of interest is not but a quantity called the total variation budget (satisfying with the minimum magnitude of a change-point). A regret bound is proved, which is weaker than existing results in our setting. Hence we do not include this algorithm in our experiments.

The first actively adaptive strategy is Windowed-Mean Shift (Yu and Mannor, 2009), which combines any bandit policy with a change-point detector that performs adaptive restarts of the bandit algorithm. However, this approach is not applicable to our setting as it takes into account side observations. Another line of research on actively adaptive algorithms uses a Bayesian point of view. A Bayesian Change-Point Detection (CPD) algorithm is combined with Thompson Sampling by Mellor and Shapiro (2013), and more recently in the Memory Bandit algorithm of Alami and Féraud (2017). Neither algorithm has theoretical guarantees and their implementation is very costly, hence we do not include them in our experiments. Our closest competitors rather use frequentist CPD algorithms (see, e.g., Basseville and Nikiforov (1993)) combined with a bandit algorithm. The first algorithm of this flavor, the Adapt-EVE algorithm (Hartland et al., 2006), uses a Page-Hinkley test and the UCB policy, but no theoretical guarantees are given. EXP3.R (Allesiardo and Féraud, 2015; Allesiardo et al., 2017) combines a CPD algorithm with EXP3: the history of all arms is reset as soon as a sub-optimal arm is detected to have become optimal, and it achieves a regret. This is weaker than the regret achieved by two recent algorithms, CUSUM-UCB (Liu et al., 2018) and Monitored UCB (M-UCB, Cao et al. (2019)).

CUSUM-UCB is based on a rather complicated two-sided CUSUM test, which uses the first samples from one arm to compute an initial average, and then detects whether a drift of size larger than occurred from this value by checking whether a random walk based on the remaining observations crosses a threshold . Thus it requires the tuning of three parameters, , and . CUSUM-UCB performs local restarts using this test, to reset the history of one arm for which the test detects a change. M-UCB uses a much simpler test, based on the most recent observations from an arm: a change is detected if the absolute difference between the empirical means of the first and second halves of those observations exceeds a threshold (a sketch of this test is given below). So it requires the tuning of two parameters, and . M-UCB performs global restarts using this test, to reset the history of all arms whenever the test detects a change on one of the arms. Compared to CUSUM-UCB, note that M-UCB is numerically much simpler as it only uses a bounded memory, of order for arms.
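For concreteness, the M-UCB test can be sketched in a few lines of Python (an illustration only: the tuning of the window size and of the threshold b is the one prescribed by Cao et al. (2019) and is not reproduced here):

def m_ucb_change_detected(recent_rewards, b):
    # M-UCB-style test: given the most recent observations of one arm, a change
    # is detected when the empirical means of the first and second halves of this
    # window differ by more than the threshold b.
    w = len(recent_rewards)
    half = w // 2
    mean_first = sum(recent_rewards[:half]) / half
    mean_second = sum(recent_rewards[half:]) / (w - half)
    return abs(mean_second - mean_first) > b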

Advantages of our approach. CUSUM-UCB and M-UCB are both analyzed under some reasonable assumptions on the problem parameters –the means – mostly saying that the breakpoints are sufficiently far away from each other. However, the proposed guarantees only hold for parameters tuned using some prior knowledge of the means. Indeed, while in both cases the threshold can be set as a function of the horizon and the number of breakpoints (also needed by previous approaches to obtain the best possible bounds), the parameter for CUSUM and for M-UCB require the knowledge of the smallest magnitude of a change-point. In this paper, we propose the first algorithm that does not require this knowledge, and still attains a regret. Moreover we propose the first comparison of the use of local and global restarts within an adaptive algorithm, by studying two variants of our algorithm. This study is supported by both theoretical and empirical results. Finally, on the practical side, while we can note that the proposed GLR test is more complex to implement than the test used by M-UCB, we propose two heuristics to speed it up while not losing much in terms of regret.

3 The Bernoulli GLR Change Point Detector

Sequential change-point detection has been extensively studied in the statistical community (see, e.g., Basseville and Nikiforov (1993) for a survey). In this article, we are interested in detecting changes in the mean of a probability distribution with bounded support. Assume that we collect independent samples , all from distributions supported in $[0,1]$. We want to discriminate between two possible scenarios: all the samples come from distributions that have a common mean , or there exists a change-point such that have some mean and have a different mean . A sequential change-point detector is a stopping time with respect to the filtration such that means that we reject the hypothesis .

Generalized Likelihood Ratio tests date back to the seminal work of Barnard (1959) and were for instance studied for change-point detection by Siegmund and Venkatraman (1995). Exploiting the fact that bounded distributions are sub-Gaussian with variance $1/4$ (i.e., their moment generating function is dominated by that of a Gaussian distribution with the same mean and variance $1/4$), the (Gaussian) GLRT, recently studied in depth by Maillard (2019), can be used for our problem. We propose instead to exploit the fact that bounded distributions are also dominated by Bernoulli distributions. We call a sub-Bernoulli distribution any distribution whose log moment generating function is dominated by that of a Bernoulli distribution with the same mean. Lemma 1 of Cappé et al. (2013) establishes that any bounded distribution supported in $[0,1]$ is a sub-Bernoulli distribution.

3.1 Presentation of the test

If the samples were all drawn from a Bernoulli distribution, our change-point detection problem would reduce to a parametric sequential test of against the alternative . The Generalized Likelihood Ratio statistic for this test is defined by

where and denote the likelihoods of the first observations under a model in and . High values of this statistic tend to indicate rejection of . Using the form of the likelihood for Bernoulli distributions, this statistic can be written with the binary relative entropy $\mathrm{kl}(x, y) := x \log\frac{x}{y} + (1-x) \log\frac{1-x}{1-y}$,

$\mathrm{GLR}(n) := \sup_{1 \leq s < n} \left[ s \, \mathrm{kl}\left(\hat\mu_{1:s}, \hat\mu_{1:n}\right) + (n-s) \, \mathrm{kl}\left(\hat\mu_{s+1:n}, \hat\mu_{1:n}\right) \right].$ (2)

Indeed, one can show that the likelihood ratio statistic rewrites as (2), where for $s \leq s'$, $\hat\mu_{s:s'}$ denotes the average of the observations collected between the instants $s$ and $s'$. This motivates the definition of the Bernoulli GLR change point detector.

Definition

The Bernoulli GLR change point detector with threshold function $\beta(n, \delta)$ is

$\tau_\delta := \inf\left\{ n \geq 2 : \sup_{1 \leq s < n} \left[ s \, \mathrm{kl}\left(\hat\mu_{1:s}, \hat\mu_{1:n}\right) + (n-s) \, \mathrm{kl}\left(\hat\mu_{s+1:n}, \hat\mu_{1:n}\right) \right] \geq \beta(n, \delta) \right\}.$ (3)

Asymptotic properties of the GLR for change-point detection have been studied by Lai and Xing (2010) for Bernoulli distributions and more generally for one-parameter exponential families, for which the GLR test is defined as in (3) but with $\mathrm{kl}$ replaced by the Kullback-Leibler divergence between two elements of that exponential family that have the corresponding means. For example, the Gaussian GLR studied by Maillard (2019) corresponds to (3) with $\mathrm{kl}(x,y)$ replaced by the quadratic divergence $2(x-y)^2$ when the variance is set to $1/4$, and non-asymptotic properties of this test are given for any sub-Gaussian samples.

In the next section, we provide new non-asymptotic results about the Bernoulli GLR test under the assumption that the samples come from a sub-Bernoulli distribution, which holds for any distribution supported in $[0,1]$. Note that Pinsker's inequality gives that $\mathrm{kl}(x,y) \geq 2(x-y)^2$, hence the Bernoulli GLR may stop earlier than the Gaussian GLR based on the quadratic divergence $2(x-y)^2$.
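To make the stopping rule (3) concrete, here is a minimal Python sketch of the Bernoulli GLR detector. The threshold is passed as a function beta(n, delta) (for instance the one of Lemma 3.2 below, or the simplified one recommended in Section 3.3); all names are ours and this sketch is only an illustration, not the implementation used in our experiments.

import math

def kl_bern(x, y, eps=1e-12):
    # binary relative entropy kl(x, y) between two Bernoulli means
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def bernoulli_glr_stop(rewards, delta, beta):
    # Return the first split point s at which the Bernoulli GLR statistic (2)
    # exceeds the threshold beta(n, delta), or None if no change is detected.
    # `rewards` is the list of observations collected so far.
    n = len(rewards)
    if n < 2:
        return None
    mean_all = sum(rewards) / n
    for s in range(1, n):                          # candidate change-point positions
        mean_left = sum(rewards[:s]) / s           # average of the first s observations
        mean_right = sum(rewards[s:]) / (n - s)    # average of the remaining n - s ones
        glr = s * kl_bern(mean_left, mean_all) + (n - s) * kl_bern(mean_right, mean_all)
        if glr > beta(n, delta):
            return s
    return None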

3.2 Properties of the Bernoulli GLR

In Lemma 3.2 below, we propose a choice of the threshold function under which the probability that there exists a false alarm under i.i.d. data is small. To define , we need to introduce the function ,

(4)

where for we define and its inverse . And for any , if and otherwise. The function is easy to compute numerically. Its use for the construction of concentration inequalities that are uniform in time is detailed in Kaufmann and Koolen (2018), where tight upper bounds on the function are also given: for and when is large. The proof of Lemma 3.2, which actually holds for any sub-Bernoulli distribution, is given in Appendix C.1.

Lemma

Assume that there exists such that and that for all . Then the Bernoulli GLR test satisfies with the threshold function

(5)

Another key feature of a change-point detector is its detection delay under a model in which a change from to occurs at time . We already observed that from Pinsker's inequality, the Bernoulli GLR stops earlier than a Gaussian GLR. Hence, one can leverage some techniques from Maillard (2019) to upper bound the detection delay of the Bernoulli GLR. Letting , one can essentially establish that for larger than (i.e., enough samples before the change), the delay can be of the same magnitude (i.e., enough samples after the change). In our bandit analysis to follow, the detection delay will be crucially used to control the probability of the good event (in Lemmas D.1 and E.1).

3.3 Practical considerations

Lemma 3.2 provides the first control of false alarm for the Bernoulli GLR employed for bounded data. However, the threshold (5) is not fully explicit as the function can only be computed numerically. Note that for sub-Gaussian distributions, results from Maillard (2019) show that the smaller and more explicit threshold can be used to prove an upper bound of for the false alarm probability of the GLR with quadratic divergence . For the Bernoulli GLR, numerical simulations suggest that the threshold (5) is a bit conservative, and in practice we recommend keeping only the leading term and using .

Also note that, as any test based on scan statistics, the GLR can be costly to implement, as at every time step it considers all previous time steps as possible positions for a change-point. Thus, in practice the following adaptation may be interesting, based on down-sampling the possible time steps:

(6)

for subsets and . Following the proof of Lemma 3.2, we can easily see that this variant enjoys the exact same false-alarm control. However, the detection delay may be slightly increased. In Appendix F.1 we show that using these practical speedups has little impact on the regret.
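As a rough illustration of (6), the restricted scan could look as follows (the arithmetic grids and the names step_n and step_s are ours; kl_bern is the function from the sketch of Section 3.1, and the grids used in practice are discussed in Appendix F.1):

def downsampled_glr_stop(rewards, delta, beta, step_n=10, step_s=5):
    # Variant of the stopping rule (6): the test is only run when the number of
    # samples n lies on a grid, and only a sub-grid of split points s is scanned.
    n = len(rewards)
    if n < 2 or n % step_n != 0:        # only test on a sub-grid of sample sizes
        return None
    mean_all = sum(rewards) / n
    for s in range(step_s, n, step_s):  # only scan a sub-grid of split points
        mean_left = sum(rewards[:s]) / s
        mean_right = sum(rewards[s:]) / (n - s)
        glr = s * kl_bern(mean_left, mean_all) + (n - s) * kl_bern(mean_right, mean_all)
        if glr > beta(n, delta):
            return s
    return None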

4 The GLR-klUCB Algorithm

Our proposed algorithm, GLR-klUCB, combines a bandit algorithm with a change-point detector running on each arm. It also needs a third ingredient, some forced exploration parameterized by to ensure each arm is sampled enough and changes can also be detected on arms currently under-sampled by the bandit algorithm. GLR-klUCB combines the klUCB algorithm (Cappé et al., 2013), known to be optimal for Bernoulli bandits, with the Bernoulli GLR change-point detector introduced in Section 3. This algorithm, formally stated as Algorithm 1, can be used in any bandit model with bounded rewards, and is expected to be very efficient for Bernoulli distributions, which are relevant for practical applications.

0:  Problem parameters: , .
0:  Algorithm parameters: exploration probability , confidence level .
0:  Option: Local or Global restart.
1:  Initialization: , and
2:  for all  do
3:     if  and  then
4:         . (forced exploration)
5:     else
6:          as defined in (7)
7:     end if
8:     Play arm and receive the reward : .
9:     if  = True then
10:         if Global restart then
11:             and .  (global restart)
12:         else
13:             and . (local restart)
14:         end if
15:     end if
16:  end for
Algorithm 1 GLR-klUCB, with Local or Global restarts

The GLR-klUCB algorithm can be viewed as a klUCB algorithm allowing for some restarts on the different arms. A restart happens when the Bernoulli GLR change-point detector detects a change on the arm that has been played (line ). To be fully specific, if and only if

with defined in (5), or , as recommended in practice, see Section 3.3. We define the (klUCB-like) index used by our algorithm, by denoting the last restart that happened for arm before time , the number of selections of arm , and their empirical mean (if ). With the exploration function (if else ), the index is defined as

(7)

In this work, we simultaneously investigate two possible behaviors: global restart (reset the history of all arms once a change was detected on one of them, line ), and local restart (reset only the history of the arm on which a change was detected, line ), which are the two different options in Algorithm 1. Under local restart, the times are in general not equal for all arms, hence the index policy associated to (7) is not a standard UCB algorithm, as each index uses a different exploration rate. One can highlight that in the CUSUM-UCB algorithm, which is the only existing algorithm based on local restarts, the UCB indices are defined differently: is replaced by with .

The forced exploration scheme used in GLR-klUCB (lines -) generalizes the deterministic exploration scheme proposed for M-UCB by Cao et al. (2019), whereas CUSUM-UCB performs randomized exploration. A consequence of this forced exploration is given in Proposition 4 (proved in Appendix B).

Proposition

For every pair of instants between two restarts on arm (i.e., for a , one has ) it holds that .
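To summarize the structure of Algorithm 1, here is a minimal Python sketch of GLR-klUCB. Several ingredients are assumptions of this sketch, not specifications from the paper: play(arm) queries the environment and returns a reward in [0, 1], klucb_index(obs, total) computes the index (7) from the observations of one arm since its last restart and the total number of plays since that restart, bernoulli_glr_stop is the detector sketched in Section 3.1, and the forced exploration is emulated by a simple coin flip of bias alpha instead of the deterministic scheme of lines 3-4.

import random

def glr_klucb(play, K, T, alpha, delta, beta, klucb_index, local_restart=True):
    obs = [[] for _ in range(K)]              # per-arm observations since last restart
    for t in range(1, T + 1):
        unseen = [a for a in range(K) if not obs[a]]
        if unseen:                            # play each arm once after a restart
            arm = unseen[0]
        elif random.random() < alpha:         # forced exploration
            arm = random.randrange(K)
        else:                                 # klUCB index policy
            total = sum(len(o) for o in obs)
            arm = max(range(K), key=lambda a: klucb_index(obs[a], total))
        reward = play(arm)
        obs[arm].append(reward)
        # run the Bernoulli GLR test on the arm that was just played
        if bernoulli_glr_stop(obs[arm], delta, beta) is not None:
            if local_restart:
                obs[arm] = []                 # local restart: reset this arm only
            else:
                obs = [[] for _ in range(K)]  # global restart: reset all arms

With local_restart=False, a single detection wipes the history of all arms (global restart); with local_restart=True only the restart time of the detected arm moves, so the indices of the different arms use different exploration rates, as discussed above.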

4.1 Results for GLR-klUCB using Global Changes

Recall that denotes the position of the -th break-point and let be the mean of arm on the segment between the and -th breakpoint: . We also introduce and the largest gap at break-point as .

Assumption 4.1 below is easy to interpret and standard in non-stationary bandits. It requires that the distance between two consecutive breakpoints is large enough: how large depends on the magnitude of the largest change that happens at those two breakpoints. Under this assumption, we provide in Theorem 4.1 a finite-time problem-dependent regret upper bound. It features the parameters and , the KL-divergence terms expressing the hardness of the (stationary) MAB problem between two breakpoints, and the terms expressing the hardness of the change-point detection problem.

Assumption

Define Then we assume that for all , .

Theorem

For and for which Assumption 4.1 is satisfied, the regret of GLR-klUCB with parameters and based on Global Restart satisfies

Corollary

For “easy” problems satisfying the corresponding Assumption 4.1, let denote the smallest value of a sub-optimality gap on one of the stationary segments, and be the smallest magnitude of any change point on any arm.

  1. Choosing , gives ,

  2. Choosing , gives .

4.2 Results for GLR-klUCB using Local Changes

Some additional notation is needed to state a regret bound for GLR-klUCB using local changes. We let denote the position of the -th change-point for arm : , with the convention , and let be the -th value of the mean of arm , such that . We also introduce the gap .

Assumption 4.2 requires that any two consecutive change-points on a given arm are sufficiently spaced (relative to the magnitude of those two change-points). Under that assumption, Theorem 4.2 provides a regret upper bound that scales with similar quantities as that of Theorem 4.1, except that the number of breakpoints is replaced with the total number of change-points .

Assumption

Define We assume that for every arm and all , .

Theorem

For and for which Assumption 4.2 is satisfied, the regret of GLR-klUCB with parameters and based on Local Restart satisfies

where .

Corollary

For “easy” problems satisfying the corresponding Assumption 4.2, with and defined as in Corollary 4.1, the following holds.

  1. Choosing , gives ,

  2. Choosing , gives .

4.3 Interpretation

Theorems 4.1 and 4.2 both show that there exists a tuning of and as a function of the horizon and the number of changes such that the regret is of order and respectively, where the notation ignores the gap terms. For very particular instances such that , i.e., at each break-point only one arm changes (e.g., problem in Section 6), the theory advocates the use of local changes. Indeed, while the regret guarantees obtained are similar, those obtained for local changes hold for a wider variety of problems as Assumption 4.2 is less stringent than Assumption 4.1. Besides those specific instances, our results are essentially worse for local than for global changes. However, we only obtain regret upper bounds, thus providing a theoretical safety net for both variants of our algorithm, and the practical story is different, as discussed in Section 6. We find that GLR-klUCB performs better with local restarts.

One can note that with the tuning of and prescribed by Corollaries 4.1 and 4.2, our regret bounds hold for problem instances for which two consecutive breakpoints (or change-points on an arm) are separated by more than time steps. Hence those guarantees are valid on “easy” problem instances only, with few changes of a large magnitude (e.g., not for problems or ). However, this does not prevent our algorithms from performing well on more realistic instances, and numerical experiments support this claim. Note that M-UCB (Cao et al., 2019) is also analyzed for the same type of unrealistic assumptions, while its practical performance is illustrated beyond those.

5 A Unified Regret Analysis

In this section, we sketch a unified proof for Theorem 4.1 and 4.2, whose detailed proofs can be found in Appendix D and E respectively. We emphasize that our approach is significantly different from those proposed by Cao et al. (2019) for M-UCB and by Liu et al. (2018) for CUSUM-UCB.

Recall that the regret is defined as . Introducing the (deterministic) set of time steps at which the forced exploration is performed before time (see lines - in Algorithm 1), one can write, using notably that , due to the bounded rewards,

Introducing some good event to be specified in each case, one can write

Each analysis requires to define an appropriate good event, stating that some change-points are detected within a reasonable delay. Each regret bound then follows from upper bounds on term (A), term (B), and on the failure probability . To control (A) and (B), we split the sum over consecutive segments for global changes and for each arm for local changes, and use elements from the analysis of klUCB of Cappé et al. (2013).

The tricky part of each proof, which crucially exploits Assumption 4.1 or 4.2, is actually to obtain an upper bound on . For example for local changes (Theorem 4.2), the good event is defined as

where is defined as the -th change detected by the algorithm on arm and is defined in Assumption 4.2. Introducing the event that all the changes up to the -th have been detected, a union bound yields the following decomposition:

Term (a) is related to the control of the probability of false alarm, which is given by Lemma 3.2 for a change-point detector run in isolation. Observe that under the bandit algorithm, the change point detector associated to arm is based on (possibly much) less than samples from arm , which makes false alarms even less likely to occur. Hence, it is easy to show that .

Term (b) is related to the control of the detection delay, which is trickier to obtain under the GLR-klUCB adaptive sampling scheme, when compared to a result like Theorem 6 in Maillard (2019) for the change-point detector run in isolation. More precisely, we need to leverage the forced exploration (Proposition 4) to be sure we have enough samples for detection. This explains why the delays defined in Assumption 4.2 are scaled by . Using some elementary calculus and a concentration inequality given in Lemma C.2, we can then prove that . Finally, the “bad event” is unlikely:

6 Experimental Results

In this section we report results of numerical simulations performed on synthetic data to compare the performance of GLR-klUCB against other state-of-the-art approaches on some piece-wise stationary bandit problems. For simplicity, we restrict to rewards generated from Bernoulli distributions, even though GLR-klUCB can be applied to any bounded distributions.

Algorithms and parameters tuning.

We include in our study two algorithms designed for the classical MAB, klUCB (Garivier and Cappé, 2011) and Thompson sampling (Agrawal and Goyal, 2012; Kaufmann et al., 2012), as well as an “oracle” version of klUCB, that we call Oracle-Restart. This algorithm knows the exact locations of the breakpoints, and restarts klUCB at those locations (without any delay).

Then, we compare our algorithms to several competitors designed for a piece-wise stationary model. For a fair comparison, all algorithms that use UCB as a sub-routine were adapted to use klUCB instead, which yields better performance. Moreover, all the algorithms are tuned as described in the corresponding paper, using in particular the knowledge of the number of breakpoints and the horizon . We first include three passively adaptive algorithms: Discounted klUCB (D-klUCB, Kocsis and Szepesvári (2006)), with discount factor ; Sliding-Window klUCB (SW-klUCB, Garivier and Moulines (2011)), using window-size ; and Discounted Thompson sampling (DTS, Raj and Kalyani (2017)), with discount factor . For this last algorithm, the discount factor suggested by the authors performed significantly worse on our problem instances.

Our main goal is to compare against actively adaptive algorithms. We include CUSUM-klUCB (Liu et al., 2018), tuned with and for easy problems () and for hard problems (), and with , , as suggested in the paper. Finally, we include M-klUCB (Cao et al., 2019), tuned with , based on prior knowledge of the problems, as the value given by the formula in the paper is too large for small horizons (on all our problem instances), a threshold , and as suggested by Remark 4 in the paper.

For GLR-klUCB, we explore the two different options with Local and Global restarts, using respectively , and , from Corollaries 4.1 and 4.2. The constant is set to (we show in Appendix G a certain robustness, with similar regret as soon as ). To speed up our simulations, two optimizations are used, with , and CUSUM also uses the first trick with (see Appendix F.1 for more details).

Results.

We report results obtained on three different piece-wise stationary bandit problems, illustrated in Figure 1 in Appendix A and described in more detail below. Additional results on two more problems are given in Appendix H, along with additional regret curves for all problems. For each experiment, the regret was estimated using independent runs.

Problem . There are arms changing times until . The arm means are shown in Fig. 1. Note that changes happen on only one arm (i.e., ), and the optimal arm changes once at , with a large gap .

Problem . (see Fig. 1). This problem is close to Problem 1, with a minimum optimality gap of . However, all arms change at every breakpoint (i.e., ), with identical gap . The first optimal arm decreases at every change ( with ), and one arm stays the worst ( with ).

Problem . (see Fig. 1) This problem is harder, with , and . Most arms change at almost every breakpoint, and the means are bounded in . The gaps are much smaller than for the first problems, with amplitudes ranging from to . Note that the assumptions of our regret upper bounds are violated, as well as the assumptions for the analysis of M-UCB and CUSUM-UCB. This problem is inspired by Figure 3 of Cao et al. (2019), where the synthetic data was obtained from manipulations on a real-world database of clicks from Yahoo!.

Table 1 shows the final regret obtained for each algorithm. Results highlighted in bold show the best non-oracle algorithm for each experiment, with our proposal being the best non-oracle strategy for problems and . Thompson sampling and klUCB are efficient, and better than Discounted-klUCB, which is very inefficient. DTS and SW-klUCB can sometimes be more efficient than their stationary counterparts, but perform worse than the Oracle and most actively adaptive algorithms. M-klUCB and CUSUM-klUCB outperform the previous algorithms, but GLR-klUCB is often better. On these problems, our proposal with Local restarts is always more efficient than with Global restarts. Note that on problem 2, all means change at every breakpoint, hence one could expect global restarts to be more efficient, yet our experiments show the superiority of local restarts on every instance.

Algorithms \Problems Pb Pb Pb
Oracle-Restart klUCB
klUCB
Discounted-klUCB
SW-klUCB
Thompson sampling
DTS
M-klUCB
CUSUM-klUCB
GLR-klUCB(Local)
GLR-klUCB(Global)
Table 1: Mean regret ± std-dev, for different algorithms on problems , (with ) and ().

One can note that the best non-oracle strategies are actively adaptive, thus our experiments confirm that an efficient bandit algorithm (e.g., klUCB) combined with an efficient change point detector (e.g., GLR) provides efficient strategies for the piece-wise stationary model.

7 Conclusion

We proposed a new algorithm for the piece-wise stationary bandit problem, GLR-klUCB, which combines the klUCB algorithm with the Bernoulli GLR change-point detector. This actively adaptive method attains state-of-the-art regret upper bounds when tuned with a prior knowledge of the number of changes , but without any other prior knowledge on the problem, unlike CUSUM-UCB and M-UCB, which require knowing a lower bound on the smallest magnitude of a change. We also gave numerical evidence of the efficiency of our proposal.

We believe that our new proof technique could be used to analyze GLR-klUCB under less stringent assumptions than the ones made in this paper (and in previous work), which would require only a few “meaningful” changes to be detected. This interesting research direction is left for future work, but the hope is that the regret would be expressed in terms of this number of meaningful changes instead of . We shall also investigate whether actively adaptive approaches can attain a regret upper bound without the knowledge of . Finally, we would like to study possible extensions of our approach to the slowly varying model (Wei and Srivastava, 2018).

Acknowledgment

Thanks to Odalric-Ambrym Maillard at Inria Lille for useful discussions, and thanks to Christophe Moy at University Rennes 1. Open Source: the simulation code used for the experiments is using Python 3. It is open-sourced at GitHub.com/SMPyBandits/SMPyBandits and fully documented at SMPyBandits.GitHub.io, for more details see the companion article (Besson, 2018). The page SMPyBandits.GitHub.io/NonStationaryBandits.html explains how to reproduce the experiments used for this article.

Appendix A Illustrations of our Benchmark

(a) Problem : arms with , and changes occur on only one arm at a time (i.e., ).
(b) Problem : arms with , and changes occur on all arms (i.e., ).
(c) Problem : arms with , changes occur on most arms at a time ().
Figure 1: History of means of arms for three problems, for and .

Appendix B Proof of Proposition 4

We consider one arm and, while the GLR-klUCB algorithm is running, two time steps , chosen between two restart times for that arm . Lines - state that if (see Algorithm 1 for details). Thus we have

Hence we have the result of Proposition 4.

Appendix C Concentration Inequalities

c.1 Proof of Lemma 3.2

Lemma 3.2 is presented for bounded distributions and is actually valid for any sub-Bernoulli distribution. It could also be presented for more general distributions satisfying

(8)

where is the log moment generating function of some one-dimensional exponential family. The Bernoulli divergence would be replaced by the corresponding divergence in that exponential family (which is the Kullback-Leibler divergence between two distributions of means and ).

Let’s go back to the Bernoulli case with divergence given in (2). A first key observation is

Hence the probability of a false alarm occurring is upper bounded as

where and are the empirical means of respectively and i.i.d. observations with mean and distribution , that are independent from the previous ones. As is sub-Bernoulli, the conclusion follows from Lemma C.1 below and from the definition of :

And so we have .

Lemma

Let and be two independent i.i.d. processes with respective means and such that

where is the moment generating function of the distribution , which is the unique distribution in an exponential family that has mean . Let be the divergence function associated to that exponential family. Introducing the notation and , it holds that for every ,

where is the function defined in (4).

Proof of Lemma c.1.

Using the same construction as in the proof of Theorem 14 in Kaufmann and Koolen (2018), one can prove that for every (for an interval ), there exists a non-negative super-martingale with respect to the filtration that satisfies and

for some function . This super-martingale is of the form

for a well-chosen probability distribution , and the function can be chosen to be any

for a parameter .

Similarly, there exists an independent super-martingale w.r.t. the filtration such that

for the same function . In the terminology of Kaufmann and Koolen (2018), the processes and are called -DCC for Doob-Cramér-Chernoff, as Doob’s inequality can be applied in combination with the Cramér-Chernoff method to obtain deviation inequalities that are uniform in time.

Here we have to modify the technique used in their Lemma 4 in order to take into account the two stochastic processes, and the presence of super-martingales instead of martingales (for which Doob's inequality still works). One can write

Using that is a super-martingale with respect to the filtration

one can apply Doob’s maximal inequality to obtain

using that and are independent and have an expectation smaller than .

Putting things together yields

for any function defined above. The conclusion follows by optimizing for both and , using Lemma 18 in Kaufmann and Koolen (2018).

c.2 A Concentration Result Involving Two Arms

The following result is useful to control the probability of the good event in our two regret analyses. Its proof follows from a straightforward application of the Cramér-Chernoff method (Boucheron et al., 2013).

Lemma

Let be the empirical mean of i.i.d. observations with mean , for , that are -sub-Gaussian. Define . Then for any , we have

Proof of Lemma c.2.

We first note that

(9)

and those two quantities can be upper-bounded similarly using the Cramér-Chernoff method.

Let and be two i.i.d. sequences that are sub-Gaussian with means and respectively. Let and be two integers, and let and denote the two empirical means based on observations from and observations from respectively. Then for every , we have

(using Markov’s inequality)

where the last inequality uses the sub-Gaussian property. Choosing the value which minimizes the right-hand side of the inequality yields

Using this inequality twice in the right-hand side of (9) concludes the proof.
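For readability, the Cramér-Chernoff step can be spelled out as follows (a reconstruction under the σ-sub-Gaussian assumption of Lemma C.2, with our own notation $\hat\mu_{1,n_1}$ and $\hat\mu_{2,n_2}$ for the two empirical means and $d > 0$ for the deviation):

$\mathbb{P}\left(\hat\mu_{1,n_1} - \hat\mu_{2,n_2} \geq \mu_1 - \mu_2 + d\right) \;\leq\; \inf_{\lambda > 0} \exp\left(\frac{\lambda^2 \sigma^2}{2}\left(\frac{1}{n_1} + \frac{1}{n_2}\right) - \lambda d\right) \;=\; \exp\left(- \frac{d^2}{2\sigma^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}\right),$

where the infimum is attained at $\lambda^\star = d \, / \, \big(\sigma^2 (\tfrac{1}{n_1} + \tfrac{1}{n_2})\big)$; the symmetric term in (9) is bounded in the same way.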

Appendix D Analysis of GLR-klUCB with Global Changes

We gave in Section 5 the following decomposition of the regret ,

(10)

d.1 Proof of Theorem 4.1.

We first introduce some notation for the proof: let be the -th change detected by the algorithm, leading to the -th (full) restart and let be the last time before that the algorithm restarted. We denote the number of selections of arm since the last (global) restart, and their empirical average (if ).

As explained before, our analysis relies on the general regret decomposition (10), with the following appropriate good event. With defined as in Assumption 4.1, we define

(11)

Under the good event, all the change points are detected within a delay of at most . Note that from Assumption 4.1, as the periods between two changes are long enough, if holds, then for all changes , one has . Using this assumption, one can prove the following.

Lemma

With defined as in (11), the “bad event” is unlikely: .

We now turn our attention to upper bounding the two terms and in (10).

Upper bound on Term (A).

where we introduce the event that all the changes up to the -th have been detected:

(12)

Clearly, and is -measurable. Observe that conditionally to , when holds, is the average of samples that have all mean . Thus, introducing as a sequence of i.i.d. random variables with mean , one can write

where the last but one inequality relies on the concentration inequality given in Lemma 2 of Cappé et al. (2013) and the fact that . Finally, using the law of total expectation yields

(13)

Upper bound on Term (B).

We let denote the empirical mean of the first observations of arm made after time . Rewriting the sum in as the sum of consecutive intervals ,

Conditionally to , when holds, for , is the empirical mean from i.i.d. observations of mean . Therefore, introducing as a sequence of i.i.d. random variables with mean , it follows from the law of total expectation that

As by definition, we can use the analysis of klUCB from Cappé et al. (2013) to further upper bound the right-most part, and we obtain

(14)

Combining the regret decomposition (10) with Lemma D.1 and the two upper bounds in (13) and (14),

which concludes the proof.

d.2 Controlling the probability of the good event: Proof of Lemma d.1

Recall that defined in (12) is the event that all the breakpoints up to the -th have been correctly detected. Using a union bound, one can write

The final result follows by proving that and , as detailed below.

Upper bound on : controlling the false alarm.

implies that there exists an arm whose associated change point detector has experienced a false-alarm:

with where is an i.i.d. sequence with mean . Indeed, conditionally to , the successive observations of arm starting from are i.i.d. with mean . Using Lemma C.1, term is upper bounded by .

Upper bound on term (b): controlling the delay.

From the definition of , there exists an arm such that . We shall prove that it is unlikely that the change-point detector associated to doesn’t trigger within the delay .

First, it follows from Proposition 4 that there exists such that where (as the mapping is non-decreasing, is at and its value at is larger than ). Using that

the event further implies that

where denotes the empirical mean of the first observations of arm since the -th restart and the empirical mean that includes observations number to number . Conditionally to , is the empirical mean of i.i.d. replications of mean , whereas is the empirical mean of i.i.d. replications of mean .

Moreover, due to Proposition 4, lies in the interval . Conditionally to , one obtains furthermore using that – which follows from Assumption 4.1 – that

Introducing (resp. ) the empirical mean of i.i.d. observations with mean (resp. ), such that and are independent, it follows that

where we have also used that .

Using Pinsker’s inequality and introducing the gap (which is such that ), one can write

Using Lemma C.2 (given above in Appendix C.2) and a union bound, the first term in the right hand side is upper bounded by (as ). For the second term, we use the observation

and, using that , one obtains

(15)

Define . Using that the mappings and are respectively decreasing and increasing in , one has, for all ,

where the last inequality follows from the fact that as by Assumption 4.1. Now the definition of readily implies that

which yields

Hence, the probability in the right-hand side of (15) is zero, which yields .

Appendix E Analysis of GLR-klUCB with Local Changes

e.1 Proof of Theorem 4.2

Our analysis relies on the general regret decomposition (10), with the following appropriate good event.

(16)

where is defined as the -th change detected by the algorithm on arm and is defined as in Assumption 4.2. Using this assumption, one can prove the following.

Lemma

The “bad event” is highly unlikely, as it satisfies .

We now turn our attention to upper bounding terms and in (10).

Upper bound on term (A).

where we introduce the event that all the changes up to the -th have been detected:

(17)

Clearly, and is -measurable. Observe that conditionally to , when holds, is the average of samples that have all mean . Thus, introducing as a sequence of i.i.d. random variables with mean , one can write

where the last but one inequality relies on the concentration inequality given in Lemma 2 of Cappé et al. (2013), and the fact that . Finally, using the law of total expectation yields

(18)

Upper bound on term (B).

Recall that is defined in the statement of Theorem 4.2 as the smallest value that still outperforms arm on the interval between the and -st change of . We let denote the empirical mean of the first observations of arm made after time . To upper bound Term (B), we introduce a sum over all arms and rewrite the sum in as a sum of consecutive intervals . The decomposition furthermore uses that on , for all changes, by Assumption 4.2.

The last inequality relies on introducing a sum over and swapping the sums. Conditionally to , when holds, for , is the empirical mean from i.i.d. observations of mean . Therefore, introducing as a sequence of i.i.d. random variables with mean , it follows from the law of total expectation that

As by definition, we can use the same analysis as in the proof of Fact 2 in Appendix A.2 of Cappé et al. (2013) to show that

(19)

The result follows by combining the decomposition (10) with Lemma E.1 and the bounds (18) and (19).

e.2 Proof of Lemma e.1

With the event defined in (17), a simple union bound yields

The final result follows by proving that the terms and are both upper bounded by .

Upper bound on : controlling the false alarms.

Under the bandit algorithm, the change point detector associated to arm is based on (possibly much) less than samples from arm , which makes false alarms even less likely to occur. More precisely, we upper bound term by

with where is an i.i.d. sequence with mean . Indeed, conditionally to , the successive observations of arm starting from are i.i.d. with mean . Using Lemma C.1, term is upper bounded by .

Upper bound on term : controlling the delay.

Controlling the detection delay on arm under an adaptive sampling scheme can be tricky. Here we need to leverage the forced exploration (Proposition 4) to be sure we have enough samples to ensure detection: the effect is that delays will be scaled by the exploration parameter .

First, it follows from Proposition 4 that there exists such that where (as the mapping is non-decreasing, is at and its value at is larger than ). Using that , the event further implies that

where denotes the empirical mean of the first observations of arm since the -th restart and the empirical mean that includes observations number to number . Conditionally to , is the empirical mean of i.i.d. replications of mean , whereas is the empirical mean of i.i.d. replications of mean .

Moreover, due to Proposition 4, lies in . Conditionally to , one obtains furthermore using that – which follows from Assumption 4.2 – that

Introducing (resp. ) the empirical mean of i.i.d. observations with mean (resp. ), such that and are independent, it follows that

where we have also used that .

Using Pinsker’s inequality and introducing the gap , one can write

Using Lemma C.2 stated in Appendix C.2 and a union bound, the first term in the right hand side is upper bounded by (as ). For the second term, we use the observation

and finally get

(20)

Define . Using that the mappings and are respectively decreasing and increasing in , one has, for all ,

where the last inequality follows from the fact that as by Assumption 4.2. Now the definition of readily implies that

which yields

Hence, the probability in the right-hand side of (20) is zero, which yields .

Appendix F Time and Memory Costs of GLR-klUCB

As demonstrated, our proposal is empirically efficient in terms of regret, but it is important to also evaluate its cost in terms of both time and memory. Remember that denotes the number of change-points. If we denote the longest duration of a stationary sequence, the worst case is for a stationary problem, and the easiest case is typically for evenly spaced change-points. We begin by reviewing the costs of the algorithms designed for stationary problems and then of other approaches.

For a stationary bandit problem, almost all classical algorithms (i.e., those designed for stationary problems) use a storage proportional to the number of arms, i.e., , as most of them only need to store the number of pulls and the empirical mean of the rewards for each arm. They also have a time complexity at every time step , hence an optimal total time complexity of . In particular, this case includes UCB, Thompson sampling and klUCB.

Most algorithms designed for abruptly changing environments are more costly, both in terms of storage and computation time, as they need more storage and time to test for changes. The oracle algorithm presented in Section 6, combined with any efficient index policy, needs a storage of at most as it stores the change-points, and has an optimal time complexity of too.

On the one hand, passively adaptive algorithms should intuitively be more efficient, but as they use a non-constant storage, they are actually as costly as the oracle. For instance SW-UCB uses a storage of , increasing as increases, and similarly for other passively adaptive algorithms. We highlight that to our knowledge, the Discounted Thompson sampling algorithm (DTS) is the only algorithm tailored for abruptly changing problems that is both efficient in terms of regret (see the simulation results, even though it has no theoretical guarantee), and optimal in terms of both computational and storage costs. Indeed, it simply needs a storage proportional to the number of arms, , and a time complexity of for a horizon (see the pseudo-code in Raj and Kalyani (2017)). Note that the discounting scheme in Discounted-UCB (D-UCB) from Kocsis and Szepesvári (2006) requires storing the whole history, and not only the empirical rewards of each arm, as after observing a reward, all previous rewards must be multiplied by if that arm was not seen for times. So the storage cannot be simply proportional to , but needs to grow as grows. Therefore, D-UCB costs in memory and in time.

On the other hand, limited-memory actively adaptive algorithms, like M-UCB, are even more costly. For instance, M-UCB would have the same cost of , except that Cao et al. introduce a window-size and run their CPD algorithm only using the last observations of each arm. If is constant w.r.t. the horizon , their algorithm has a storage cost bounded by and a running time of , being comparable to the cost of the oracle approach. However, in practice as well as in the theoretical results, the window size should depend on and on a prior knowledge of the minimal change size (see Remark 1 in Cao et al. (2019)), and . Hence it makes more sense to consider that M-UCB has a time cost bounded by and a memory cost bounded by , which is better than our proposal but more costly than the oracle, DTS, or stationary algorithms.

Time and memory cost of GLR-klUCB.

Actively adaptive algorithms are more efficient (when tuned correctly) but at the price of being more costly, both in terms of time and memory. The two algorithms using the CUSUM or GLR tests (as well as PHT), when used with an efficient and low-cost index policy (that is, choosing the arm to play only costs at any time ), are found to be efficient in terms of regret. However, they need to store all past rewards and pull histories, to be able to reset them when the CPD algorithm asks for a restart, so they have a memory cost of , that is in the worst case (compared to for algorithms designed for the stationary setting). They are also costly in terms of computation time, as at current time , when trying to detect a change with observations of arm (i.e., in Algorithm 1), the CPD algorithm (CUSUM or GLR) costs a time . Indeed, it needs to compute sliding averages for every in an interval of size (i.e., and ) and a test for each , which costs a constant time (e.g., computing two and checking a threshold for our GLR test). So for every , the running time is , if the sliding averages are computed iteratively based on a simple scheme: first, one computes the total average and sets and . Then for every successive value of , both the left and right sliding-window means can be updated with a single memory access and two computations.
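A minimal Python sketch of this incremental scan (reusing kl_bern and the threshold function beta from the sketch of Section 3.1; the structure and names are ours and only illustrate the single-pass update):

def glr_scan_incremental(rewards, delta, beta):
    # Same test as bernoulli_glr_stop, but the left and right means are updated in
    # constant time per split point s, so that one call costs a time linear in the
    # number of observations instead of quadratic.
    n = len(rewards)
    if n < 2:
        return None
    total = sum(rewards)
    mean_all = total / n
    threshold = beta(n, delta)
    sum_left = 0.0
    for s in range(1, n):
        sum_left += rewards[s - 1]                 # one memory access per step
        mean_left = sum_left / s                   # mean of the first s observations
        mean_right = (total - sum_left) / (n - s)  # mean of the remaining ones
        glr = s * kl_bern(mean_left, mean_all) + (n - s) * kl_bern(mean_right, mean_all)
        if glr > threshold:
            return s
    return None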

To sum up, at every time step the CPD algorithm needs a time , and in the end, the time complexity of CUSUM-klUCB as well as GLR-klUCB is , which can be up to , much more costly than for klUCB for instance.

Our proposal GLR-klUCB requires a storage of the order of and a running time of the order of , and the two bounds are validated experimentally, see Table 2.

Empirical measurements of computation times and memory costs.

A theoretical analysis shows that there is a large gap between the costs of stationary or passively adaptive algorithms and the costs of actively adaptive algorithms, for both computation time and memory consumption. We include here an extensive comparison of memory costs of the different algorithms. For instance, on the same experiment as the one used for Table 4, that is problem 1 with , and then with and , and independent runs, in our Python implementation (using the SMPyBandits library, Besson (2018)), we can measure the (mean) real memory cost of the different algorithms. Tables 2 and 3 below give the mean (± 1 standard deviation) of the real computation time and memory consumption used by the algorithms. The computation time is normalized by the horizon, to reflect the (mean) time used for each time step . We also found that our two optimizations described below in F.1 do not reduce the memory, thus we used to speed up the simulations. The conclusions to draw from Tables 2 and 3 are twofold.

Table 2: Normalized computation time (mean ± 1 standard deviation) per time step, for different horizons, for Thompson sampling, DTS, klUCB, Discounted-klUCB, SW-klUCB, Oracle-Restart klUCB, M-klUCB, CUSUM-klUCB and GLR-klUCB.
Table 3: Non-normalized memory costs, for the same problem (Pb 1) with different horizons, for the same algorithms: of the order of bytes for Thompson sampling, DTS and klUCB, and of kibibytes (KiB) for Discounted-klUCB, SW-klUCB, Oracle-Restart klUCB, M-klUCB, CUSUM-klUCB and GLR-klUCB.

First, we verify the results stated above for the time complexity of the different algorithms. On the one hand, stationary and passively adaptive algorithms all have a time complexity scaling linearly with the horizon, as their normalized computation time is almost constant w.r.t. $T$. We check that TS and DTS are the fastest algorithms, noticeably faster than the klUCB-based algorithms, due to the fact that sampling from a Beta posterior is typically faster than performing a (small) numerical optimization step to compute the klUCB indexes. We also check that the passively adaptive algorithms add a non-trivial but constant overhead on the computation time of their base algorithm, e.g., SW-klUCB compared to klUCB, or DTS compared to TS. On the other hand, we also check that actively adaptive algorithms are more costly. The normalized computation time of M-klUCB does not increase much when the horizon is doubled, as the window size was set to a constant w.r.t. the horizon in this experiment. The computation time of GLR-klUCB follows the bound we presented above.

Second, we also verify the results for the memory costs of the different algorithms. Stationary algorithms, and passively adaptive algorithms based on a discount factor (Discounted-klUCB, DTS), have a memory cost constant w.r.t. the horizon $T$, as stated above, while algorithms based on a sliding window have a memory cost increasing with the horizon. The Oracle-Restart and the actively adaptive algorithms see their memory costs increase similarly. These measurements validate the upper bound we gave above on their memory costs, as the change-points are evenly spaced for this problem.
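For reference, here is a minimal sketch of how such per-step timing and peak-memory measurements can be obtained in Python. The choose_arm / observe / draw interface is a hypothetical simplification used for illustration; the actual instrumentation used with SMPyBandits may differ.

```python
import time
import tracemalloc

def measure_time_and_memory(algorithm, environment, horizon):
    """Run one bandit algorithm for `horizon` steps and return
    (mean computation time per step in seconds, peak traced memory in bytes).
    `algorithm.choose_arm()`, `algorithm.observe(arm, reward)` and
    `environment.draw(arm, t)` are hypothetical interfaces."""
    tracemalloc.start()
    start = time.perf_counter()
    for t in range(horizon):
        arm = algorithm.choose_arm()          # cost of the index policy + possible CPD resets
        reward = environment.draw(arm, t)     # piece-wise stationary reward
        algorithm.observe(arm, reward)        # update statistics, run the CPD test
    elapsed = time.perf_counter() - start
    _, peak_memory = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed / horizon, peak_memory
```

Averaging the two returned quantities over independent runs gives quantities analogous to those reported in Tables 2 and 3.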

F.1 Two ideas of numerical optimization

In order to empirically mitigate this weakness of our proposal, we suggest two simple optimization tweaks to GLR-klUCB (see Algorithm 1) that drastically speed up its computation time.

  1. The first optimization, parametrized by a constant $\Delta n \geq 1$, is the following idea: we can test for statistical changes not at every time step but only once every $\Delta n$ time steps. In practice, instead of sub-sampling the time $t$, we propose to sub-sample the number of samples of the arm before calling GLR to check for a change on that arm (that is, the counter $n$ used in Algorithm 1). Note that this first heuristic (using $\Delta n$) can be applied to M-UCB as well as to CUSUM-UCB and PHT-UCB (and their variants using klUCB), with a similar speed-up and typically similar consequences on the algorithm’s performance.

  2. The second optimization is in the same spirit, and uses a parameter $\Delta s \geq 1$. When running the GLR test on the observations of an arm, instead of considering every possible split point $s$, we can skip some of them and only evaluate the test at the split points $s$ that are multiples of $\Delta s$.

The new GLR test still uses the stopping time defined in (6), but the test is only performed when the number of observations is a multiple of $\Delta n$, and the supremum is restricted to split points $s$ that are multiples of $\Delta s$. The goal is both to speed up every call to the GLR test (with a step $\Delta s$, every call should be about $\Delta s$ times faster), and to reduce the overhead of running the tests on top of the index policy (klUCB) by testing for changes less often (a step $\Delta n$ should speed up the whole computation by roughly a factor $\Delta n$). A sketch of this down-sampled test is given below.
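The following sketch shows how the two down-sampling parameters can be plugged into the GLR scan sketched earlier in this appendix; the parameter names delta_n ($\Delta n$) and delta_s ($\Delta s$), their default values and the calling convention are illustrative assumptions, not the exact SMPyBandits implementation.

```python
from math import log

def kl_bernoulli(x, y, eps=1e-9):
    # Bernoulli KL divergence kl(x, y), clipped to avoid log(0) (same as above).
    x, y = min(max(x, eps), 1 - eps), min(max(y, eps), 1 - eps)
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

def bernoulli_glr_test_downsampled(rewards, threshold, delta_n=10, delta_s=10):
    """Down-sampled GLR scan: the test is only run when the number of samples n
    is a multiple of delta_n, and only split points s that are multiples of
    delta_s are evaluated, so each scan performs about n / delta_s tests."""
    n = len(rewards)
    if n < 2 or n % delta_n != 0:
        return False                  # first optimization: skip the test at this round
    total_sum = sum(rewards)
    mean_all = total_sum / n
    left_sum = 0.0
    for s in range(1, n):
        left_sum += rewards[s - 1]    # cheap running-sum update, done for every s
        if s % delta_s != 0:
            continue                  # second optimization: evaluate only every delta_s splits
        mean_left = left_sum / s
        mean_right = (total_sum - left_sum) / (n - s)
        glr = s * kl_bernoulli(mean_left, mean_all) \
            + (n - s) * kl_bernoulli(mean_right, mean_all)
        if glr >= threshold:
            return True
    return False
```

In words, the first parameter reduces how often a scan is launched at all, while the second one reduces the number of split points evaluated inside each scan.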

Empirical validation of these optimization tricks.

We consider problem 1 presented above (Figure 1), averaged over independent repetitions, and we give the means (± 1 standard deviation) of both the regret and the computation time of GLR-klUCB with Local restarts, for different values of the parameters $\Delta n$ and $\Delta s$, in Table 4 below. The other parameters of GLR-klUCB are chosen according to Corollary 4.2. The algorithm analyzed in Section 5 corresponds to $\Delta n = \Delta s = 1$.

On the same problem, the Oracle-Restart klUCB, klUCB and M-klUCB all have running times of the order of milliseconds, while the running time of CUSUM-klUCB is of the order of seconds. This shows that our proposal is very efficient compared to stationary algorithms, and comparable to the state-of-the-art actively adaptive algorithms. Moreover, this shows that the two heuristics efficiently speed up the computation time of GLR-klUCB. Choosing small values of $\Delta n$ and $\Delta s$ can speed up GLR-klUCB, making it fast enough to be comparable to recent efficient approaches like M-UCB, and even comparable to the oracle policy. It is very satisfying to see that the use of these optimizations does not increase the regret of GLR-klUCB much, as it still outperforms most state-of-the-art algorithms, while significantly reducing the computation time as desired. With such numerical optimizations, GLR-klUCB is not significantly slower than klUCB while being much more efficient on piece-wise stationary problems.

Table 4: Effects of the two optimization parameters $\Delta n$ and $\Delta s$ on the mean regret (top) and mean computation time (bottom), for GLR-klUCB on a simple problem. Using the optimizations does not increase the regret much, but speeds up the computations considerably.

Appendix G Sensitivity analysis of the exploration probability

As demonstrated in our experiments, the value of the exploration probability $\alpha$ suggested by Corollary 4.2 is a good choice for GLR-klUCB to be efficient. The dependency w.r.t. the horizon comes from Corollary 4.2 when the number of change-points is unknown, and we observe in Table 5 below that the value of $\alpha$ does not influence the performance of our proposal much, as long as it is not too large. Different values of $\alpha$ are explored, for three of the problems presented above, and we average over independent runs. The other parameters are set to Local restarts, with the two numerical optimizations of Appendix F.1 used to speed up the experiments. In the three experiments, GLR-klUCB performs close to the Oracle-Restart klUCB, and outperforms all or almost all the other approaches, for all choices of $\alpha$. We observe in Table 5 that the parameter $\alpha$ does not have a significant impact on the performance, and that, surprisingly, choosing $\alpha = 0$ does not reduce the empirical performance of GLR-klUCB, which means that on some problems there is no need for forced exploration. However, the analysis of GLR-klUCB is based on the forced exploration, and we found that for a larger number of arms, or for problems where the optimal arm changes constantly, the forced exploration is required.

Table 5: Mean regret ± standard deviation, for different choices of the exploration probability $\alpha$, on three problems with the same horizon, for GLR-klUCB with Local restarts.

Appendix H Additional Numerical Results

This section includes additional figures and numerical results, not presented in the main text. We describe and illustrate in Figures 2 and 3 two more problems.

Problem 4.

Like the problems presented in the main text, it uses a small number of arms and change-points, but the stationary sequences between successive change-points no longer have the same length, as illustrated in Figure 2. Classical (stationary) algorithms such as klUCB can be “tricked” by long enough stationary sequences, as they identify the current optimal arm with high confidence, and then fail to adapt to a new optimal arm after a change-point. We observe below in Table 6 that they can suffer a higher regret when the change-points are more spaced out, as this problem starts with a longer stationary sequence.

Figure 2: Problem 4: piece-wise constant means, with changes occurring on all arms at the same time instants.

Problem 5.

Like one of the problems presented in the main text, this hard problem is inspired by synthetic data obtained from a real-world database of clicks from Yahoo! (see Figure 3 from Liu et al. (2018)). It is harder, with many change-points spread over the arms and a longer horizon. Some arms change very frequently, for a large total number of breakpoints, but the optimal arm is almost always the same one. It is a good benchmark to check whether the actively adaptive policies detect too many changes, as the Oracle-Restart policy suffers a higher regret than klUCB on it. The means are also confined to a small range, with gaps of small amplitude, as shown in Figure 3.

Figure 3: Problem 5: changes occur on some of the arms very frequently.

Interpretations.

We include below the figures showing the simulation results for the problems presented in Section 6 and Appendix H. The results in terms of mean regret are given in Tables 1 and 6, but it is also interesting to observe two plots for each experiment. First, we show the mean regret as a function of time, for the algorithms considered (Discounted-klUCB is removed as it is always the least efficient and suffers a high regret). Efficient stationary algorithms, like TS and klUCB, typically suffer a linear regret after a change of the optimal arm if they had “too many” samples before the change-point (e.g., on Figure 4, and even more on Figures 8 and 9). This illustrates the conjecture that classical algorithms can suffer a linear regret even on simple piece-wise stationary problems.

On very simple problems, like problem 1, all the algorithms designed for piece-wise stationary environments perform similarly, but as soon as the gaps are smaller or there are more changes, we clearly observe that our approach GLR-klUCB can outperform the two other actively adaptive algorithms, CUSUM-klUCB and M-klUCB (e.g., on Figure 6), and performs much better than the passively adaptive algorithms DTS and SW-klUCB (e.g., on Figure 7). Our approach, with the two options of Local or Global restarts, performs very close to the oracle on such problems.

Finally, in the case of hard problems that have a lot of changes but where the optimal arm barely changes, like problem 5, we verify in Figure 9 that klUCB and TS can outperform the oracle policy. Indeed, the oracle policy is suboptimal, as it restarts as soon as one arm changes but is unaware of which changes are meaningful, while stationary policies that quickly identify the best arm will play it most of the time, achieving a smaller regret. We note that, sadly, all actively adaptive policies fail to outperform stationary policies on such hard problems, because they do not observe enough rewards from each arm between two restarts (i.e., Assumptions 4.1 and 4.2 for our Theorems 4.1 and 4.2 are not satisfied). We can also verify that the two options for GLR-klUCB, Local and Global restarts, give close results, and that the Local option is always better.

We also show the empirical distribution of the regret, in Figure 5. It shows that all algorithms have a rather small variance of their regret, except Thompson sampling, which has a large tail, due to its large mean regret on this (easy) non-stationary problem.

Table 6: Mean regret ± std-dev, for problems 4 and 5, for the Oracle-Restart klUCB, klUCB, SW-klUCB, Discounted-klUCB, Thompson sampling, DTS, M-klUCB, CUSUM-klUCB, GLR-klUCB (Local) and GLR-klUCB (Global) algorithms. Problem 4 starts with a first long stationary sequence, and problem 5 is much harder, with many more breakpoints and changes.
Figure 4: Mean regret as a function of time, for problem 1.
Figure 5: Histograms of the distributions of the regret, for problem 1.
Figure 6: Mean regret as a function of time, for problem 2.
Figure 7: Mean regret as a function of time, for problem 3.
Figure 8: Mean regret as a function of time, for problem 4.
Figure 9: Mean regret as a function of time, for problem 5.

Footnotes

  1. This alternative choice is currently not fully supported by theory, as we found mistakes in the analysis of CUSUM-UCB: Hoeffding’s inequality is used with a random number of observations and a random threshold to obtain some of their equations.
  2. Liu et al. (2018); Cao et al. (2019) both mention that extending their analysis to the use of klUCB should not be too difficult.
  3. We used one core of an Intel Core CPU, on a GNU/Linux machine running Ubuntu with Python.

References

  1. Analysis of Thompson sampling for the Multi-Armed Bandit problem. In Conference On Learning Theory, Cited by: §6.
  2. Memory Bandits: Towards the Switching Bandit Problem Best Resolution. In NIPS 2017 - 31st Conference on Neural Information Processing Systems, Cited by: §2.
  3. The Non-Stationary Stochastic Multi-Armed Bandit Problem. International Journal of Data Science and Analytics 3 (4), pp. 267–283. Cited by: §2.
  4. Exp3 with Drift Detection for the Switching Bandit Problem. In IEEE International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–7. Cited by: §2.
  5. Finite-time Analysis of the Multi-armed Bandit Problem. Machine Learning 47 (2), pp. 235–256. Cited by: §1.
  6. The Non-Stochastic Multi-Armed Bandit Problem. SIAM journal on computing 32 (1), pp. 48–77. Cited by: §1.
  7. Control charts and stochastic processes. Journal of the Royal Statistical Society. Series B (Methodological), pp. 239–271. Cited by: §3.
  8. Detection of Abrupt Changes: Theory And Application. Vol. 104, Prentice Hall Englewood Cliffs. Cited by: §2, §3.
  9. Stochastic Multi-Armed Bandit Problem with Non-Stationary Rewards. In Advances in Neural Information Processing Systems, pp. 199–207. Cited by: §2.
  10. SMPyBandits: an Open-Source Research Framework for Single and Multi-Players Multi-Arms Bandits (MAB) Algorithms in Python. Note: Code at https://GitHub.com/SMPyBandits/SMPyBandits/, documentation at https://SMPyBandits.GitHub.io/. Cited by: Appendix F, §7.
  11. Multi-Armed Bandit Learning in IoT Networks: Learning helps even in non-stationary settings. In 12th EAI Conference on Cognitive Radio Oriented Wireless Network and Communication, CROWNCOM Proceedings. Cited by: §1.
  12. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford university press. Cited by: §C.2.
  13. Nearly Optimal Adaptive Procedure for Piecewise-Stationary Bandit: a Change-Point Detection Approach. In AISTATS, Okinawa, Japan. Cited by: Appendix F, §1, §2, §4.3, §4, §5, §6, §6, footnote 2.
  14. Kullback-Leibler Upper Confidence Bounds For Optimal Sequential Allocation. Annals of Statistics 41(3), pp. 1516–1541. Cited by: §D.1, §D.1, §E.1, §E.1, §1, §3, §4, §5.
  15. The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond. In Conference on Learning Theory (COLT), pp. 359–376. Cited by: §6.
  16. KL-UCB-switch: optimal regret bounds for stochastic bandits from both a distribution-dependent and a distribution-free viewpoints. Note: working paper or preprint External Links: Link Cited by: §1.
  17. On Upper-Confidence Bound Policies For Switching Bandit Problems. In Algorithmic Learning Theory (ALT), pp. 174–188. Cited by: §2, §2, §6.
  18. Multi-Armed Bandit, Dynamic Environments and Meta-Bandits. In NIPS 2006 Workshop, Online Trading Between Exploration And Exploitation, Cited by: §2.
  19. Mixture Martingales Revisited with Applications to Sequential Tests and Confidence Intervals. Note: arXiv preprint arXiv:1811.11419 External Links: Link Cited by: §C.1, §C.1, §C.1, §3.2.
  20. Thompson Sampling: an Asymptotically Optimal Finite-Time Analysis. In Algorithmic Learning Theory (ALT), pp. 199–213. Cited by: §6.
  21. Node-based optimization of LoRa transmissions with Multi-Armed Bandit algorithms. In ICT 2018 - 25th International Conference on Telecommunications, Saint Malo, France. Cited by: §1.
  22. Discounted UCB. In 2nd PASCAL Challenges Workshop, Cited by: Appendix F, §1, §2, §2, §6.
  23. Asymptotically Efficient Adaptive Allocation Rules. Advances in Applied Mathematics 6 (1), pp. 4–22. Cited by: §1.
  24. Sequential change-point detection when the pre-and post-change parameters are unknown. Sequential Analysis 29 (2), pp. 162–175. Cited by: §3.1.
  25. Bandit Algorithms. Note: Draft of Friday 18th January, 2019, Revision: 1699 External Links: Link Cited by: §1.
  26. A Contextual-Bandit Approach to Personalized News Article Recommendation. In International Conference on World Wide Web, Cited by: §1.
  27. A Change-Detection based Framework for Piecewise-stationary Multi-Armed Bandit Problem. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), Cited by: Appendix H, §1, §2, §5, §6, footnote 2.
  28. Multi-Armed Bandits with Application to 5G Small Cells. IEEE Wireless Communications 23 (3), pp. 64–73. Cited by: §1.
  29. Sequential change-point detection: Laplace concentration of scan statistics and non-asymptotic delay bounds. In Algorithmic Learning Theory (ALT), Cited by: §3.1, §3.2, §3.3, §3, §5.
  30. Thompson Sampling in Switching Environments with Bayesian Online Change Detection. In Artificial Intelligence and Statistics, pp. 442–450. Cited by: §2, §2.
  31. Taming Non-Stationary Bandits: a Bayesian Approach. Note: arXiv preprint arXiv:1707.09727 External Links: Link Cited by: Appendix F, §2, §6.
  32. Some Aspects of the Sequential Design of Experiments. Bulletin of the American Mathematical Society 58 (5), pp. 527–535. Cited by: §1.
  33. Using the Generalized Likelihood Ratio Statistic for Sequential Detection of a Change Point. The Annals of Statistics, pp. 255–271. Cited by: §3.
  34. On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples. Biometrika 25. Cited by: §1.
  35. On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems. Note: arXiv preprint arXiv:1802.08380 External Links: Link Cited by: §2, §7.
  36. Piecewise-Stationary Bandit Problems with Side Observations. In Proceedings of the International Conference on Machine Learning (ICML), pp. 1177–1184. Cited by: §1, §2, §2.