The Generalized Likelihood Ratio Test meets klUCB: an Improved Algorithm for Piece-Wise Non-Stationary Bandits

Lilian Besson1 and Emilie Kaufmann2
1 Lilian.Besson@CentraleSupelec.fr
CentraleSupélec (campus of Rennes), IETR, SCEE Team,
Avenue de la Boulaie, Cesson-Sévigné, France
2 Emilie.Kaufmann@univ-lille.fr
CNRS & Université de Lille, Inria SequeL team
UMR 9189 – CRIStAL, Lille, France




Abstract

We propose a new algorithm for the piece-wise i.i.d. non-stationary bandit problem with bounded rewards. Our proposal, GLR-klUCB, combines an efficient bandit algorithm, klUCB, with an efficient, parameter-free, change-point detector, the Bernoulli Generalized Likelihood Ratio Test, for which we provide new theoretical guarantees of independent interest. We analyze two variants of our strategy, based on local restarts and global restarts, and show that their regret is upper-bounded by $O(\Upsilon_T \sqrt{T \log(T)})$ if the number of change-points $\Upsilon_T$ is unknown, and by $O(\sqrt{\Upsilon_T T \log(T)})$ if $\Upsilon_T$ is known. This improves the state-of-the-art bounds, as our algorithm needs no tuning based on knowledge of the problem complexity other than $\Upsilon_T$. We present numerical experiments showing that GLR-klUCB outperforms passively and actively adaptive algorithms from the literature, and highlight the benefit of using local restarts.

Keywords: Multi-Armed Bandits; Change Point Detection; Non-Stationary Bandits

1 Introduction

Multi-Armed Bandit (MAB) problems form a well-studied class of sequential decision making problems, in which an agent repeatedly chooses an action or “arm” –in reference to the arm of a one-armed bandit– among a set of $K$ arms. In the most standard version of the stochastic bandit model, each arm $a$ is associated with an i.i.d. sequence of rewards that follow some distribution of mean $\mu_a$. Upon selecting arm $A_t$ at round $t$, the agent receives the reward associated to the chosen arm, and her goal is to adopt a sequential sampling strategy that maximizes the expected sum of these rewards. This is equivalent to minimizing the regret, defined as the difference between the total reward of the oracle strategy always selecting the arm with largest mean, $\mu^* := \max_a \mu_a$, and that of our strategy: $R_T := \mathbb{E}\left[\sum_{t=1}^T (\mu^* - \mu_{A_t})\right]$.

Regret minimization in stochastic bandits has been extensively studied since the works of Robbins (1952) and Lai and Robbins (1985), and several algorithms with a logarithmic problem-dependent regret upper bound have been proposed (see, e.g., Lattimore and Szepesvári (2019) for a survey). Among those, the klUCB algorithm (Cappé et al., 2013) has been shown to be asymptotically optimal for Bernoulli distributions (in that it exactly matches the lower bound given by Lai and Robbins (1985)) and can also be employed when the rewards are assumed to be bounded in $[0,1]$. Problem-independent upper bounds of the form $O(\sqrt{KT})$ (with no hidden constant depending on the arms distributions) have also been established for stochastic algorithms, like MOSS, or klUCB-Switch by Garivier et al. (2018), while klUCB is known to enjoy a sub-optimal $O(\sqrt{KT\log(T)})$ problem-independent regret.

Stochastic bandits were historically introduced as a simple model for clinical trials, where arms correspond to some treatments with unknown efficacy (Thompson, 1933). More recently, MAB models have been proved useful for different applications, like cognitive radio, where arms can model the vacancy of radio channels, or parameters of a dynamically configurable radio hardware (Maghsudi and Hossain, 2016; Bonnefoi et al., 2017; Kerkouche et al., 2018). Another application is the design of recommender systems, where arms model the popularity of different items (e.g., news recommendation, Li et al. (2010)).

For both cognitive radio and recommender systems, the assumption that the arm distributions do not evolve over time may be a strong limitation. Indeed, in cognitive radio new devices can enter or leave the network, which impacts the availability of the radio channel they use to communicate; whereas in online recommendation, the popularity of items is also subject to trends. Hence, there has been some interest in taking those non-stationary aspects into account within a multi-armed bandit model.

A first possibility to cope with non-stationarity is to model the decision making problem as an adversarial bandit problem (Auer et al., 2002b). Under this model, rewards are completely arbitrary and are not assumed to follow any probability distribution. For adversarial environments, the pseudo-regret, which compares the accumulated reward of a given strategy with that of the best fixed-arm policy, is often studied. The pseudo-regret of the EXP3 algorithm has been shown to be $O(\sqrt{KT})$, which matches the lower bound given by Auer et al. (2002b). However, this model is a bit too general for the considered applications, where reward distributions do not necessarily vary at every round. For these reasons, an intermediate model, called the piece-wise stationary MAB, has been introduced by Kocsis and Szepesvári (2006) and Yu and Mannor (2009). In this model, described in full detail in Section 2, the (random) reward of arm $a$ at round $t$ has some mean $\mu_a(t)$ that is constant on intervals between two breakpoints, and the regret is measured with respect to the current best arm $a_t^* := \arg\max_a \mu_a(t)$.

In this paper, we propose a new algorithm for the piece-wise stationary bandit problem with bounded rewards, called GLR-klUCB. Like previous approaches – CUSUM (Liu et al., 2018) and M-UCB (Cao et al., 2019) – our algorithm relies on combining a standard multi-armed bandit algorithm with a change-point detector. For the bandit component, we propose the use of the klUCB algorithm, which is known to outperform UCB1 (Auer et al., 2002a) used in previous works. For the change-point detector, we propose the Bernoulli Generalized Likelihood Ratio Test (GLRT), for which we provide new non-asymptotic properties that are of independent interest. This choice is particularly appealing because, unlike previous approaches, the Bernoulli GLRT is parameter-free: it does not need the tuning of a window size (the parameter $w$ in M-UCB), or the knowledge of a lower bound on the magnitude of the smallest change (the parameter $\epsilon$ in CUSUM).

In this work we jointly investigate, both in theory and in practice, two possible combinations of the bandit algorithm with a change-point detector, namely the use of local restarts (resetting the history of an arm each time a change-point is detected on that arm) and global restarts (resetting the history of all arms once a change-point is detected on one of them). We provide a regret upper bound scaling in $O(\sqrt{\Upsilon_T T \log(T)})$ for both versions of GLR-klUCB, matching existing results (when $\Upsilon_T$ is known). Our numerical simulations reveal that using local restarts leads to better empirical performance, and show that our approach often outperforms existing competitors.

The article is structured as follows. We introduce the model and review related works in Section 2. In Section 3, we study the Generalized Likelihood Ratio Test (GLRT) as a Change-Point Detector (CPD) algorithm. We introduce the two variants of the GLR-klUCB algorithm in Section 4, where we also present an upper bound on the regret of each variant. The unified regret analysis for these two algorithms is sketched in Section 5. Numerical experiments are presented in Section 6, with more details in the Appendix.

2 The Piece-Wise Stationary Bandit Setup and Related Works

A piece-wise stationary bandit model is characterized by a set of $K$ arms. A (random) stream of rewards $(X_{a,t})_{t \in \mathbb{N}^*}$ is associated to each arm $a \in \{1, \dots, K\}$. We assume that the rewards are bounded, and without loss of generality we assume that $X_{a,t} \in [0,1]$. We denote by $\mu_a(t)$ the mean reward of arm $a$ at round $t$. At each round $t$, a decision maker has to select an arm $A_t$, based on past observations, and receives the corresponding reward $X_{A_t, t}$. At time $t$, we denote by $a_t^*$ an arm with maximal expected reward, i.e., $\mu_{a_t^*}(t) = \max_a \mu_a(t)$, called an optimal arm (possibly not unique).

A policy $\pi$ chooses the next arm to play based on the sequence of past plays and obtained rewards. The performance of $\pi$ is measured by its (piece-wise stationary) regret, the difference between the expected reward obtained by an oracle policy playing an optimal arm $a_t^*$ at time $t$, and that of the policy $\pi$:

$R_T := \mathbb{E}\left[\sum_{t=1}^{T} \left(\mu_{a_t^*}(t) - \mu_{A_t}(t)\right)\right]. \qquad (1)$

In the piece-wise i.i.d. model, we furthermore assume that there is a (relatively small) number of breakpoints, denoted by $\Upsilon_T := \sum_{t=1}^{T-1} \mathbb{1}\left(\exists a : \mu_a(t) \neq \mu_a(t+1)\right)$. We define the $k$-th breakpoint by $\tau^{(k)} := \inf\{t > \tau^{(k-1)} : \exists a : \mu_a(t+1) \neq \mu_a(t)\}$, with the convention $\tau^{(0)} := 0$. Hence for $t \in (\tau^{(k)}, \tau^{(k+1)}]$, the rewards associated to each arm are i.i.d. Note that when a breakpoint occurs, we do not assume that all the arm means change, but that there exists an arm whose mean has changed. Depending on the application, many scenarios can be meaningful: changes occurring on all arms simultaneously (due to some exogenous event), or only a few arms changing at each breakpoint. Introducing the number of change-points on arm $a$, defined as $\Upsilon_T^a := \sum_{t=1}^{T-1} \mathbb{1}\left(\mu_a(t) \neq \mu_a(t+1)\right)$, it clearly holds that $\Upsilon_T^a \leq \Upsilon_T$, but there can be an arbitrary difference between these two quantities for some arms. Letting $C_T := \sum_{a=1}^{K} \Upsilon_T^a$ be the total number of change-points on the arms, one can have $C_T$ as large as $K \Upsilon_T$.
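To make these definitions concrete, the short snippet below computes $\Upsilon_T$, the per-arm numbers of change-points $\Upsilon_T^a$ and their sum $C_T$ from a matrix of means. The toy means used here are purely illustrative and are not one of the benchmark problems of Section 6.

```python
import numpy as np

# Toy example: K = 3 arms, T = 8 rounds; mu[a, t] is the mean of arm a at round t + 1.
mu = np.array([
    [0.1, 0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 0.5],   # arm 0 changes once
    [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4],   # arm 1 never changes
    [0.8, 0.8, 0.3, 0.3, 0.3, 0.3, 0.9, 0.9],   # arm 2 changes twice
])
K, T = mu.shape

# A breakpoint between rounds t and t+1 means that at least one arm mean changes there.
breakpoints = [t for t in range(T - 1) if np.any(mu[:, t] != mu[:, t + 1])]
upsilon_T = len(breakpoints)                                          # number of breakpoints
upsilon_a = [int((mu[a, :-1] != mu[a, 1:]).sum()) for a in range(K)]  # per-arm change-points
C_T = sum(upsilon_a)                                                  # total number of change-points

print(breakpoints)   # [1, 3, 5]
print(upsilon_T)     # 3
print(upsilon_a)     # [1, 0, 2]
print(C_T)           # 3  (in general, Upsilon_T <= C_T <= K * Upsilon_T)
```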

The piece-wise stationary bandit model can be viewed as an interpolation between the stationary and adversarial models, as the stationary model corresponds to $\Upsilon_T = 0$, while the adversarial model can be considered as a special (worst) case, with $\Upsilon_T = T - 1$. However, analyzing an algorithm for the piece-wise stationary bandit model requires to assume a small number of changes, typically $\Upsilon_T = o(\sqrt{T})$.

Related work.

The piece-wise stationary bandit model was first studied by Kocsis and Szepesvári (2006); Yu and Mannor (2009); Garivier and Moulines (2011). It is also known as switching (Mellor and Shapiro, 2013) or abruptly changing stationary (Wei and Srivastava, 2018) environment. To our knowledge, all the previous approaches combine a standard bandit algorithm, like UCB, Thompson Sampling or EXP3, with a strategy to account for changes in the arms distributions. This strategy often consists in forgetting old rewards, to efficiently focus on the most recent ones, more likely to be similar to future rewards. We make the distinction between passively and actively adaptive strategies.

The first proposed mechanisms to forget the past consist in either discounting rewards (at each round, when getting a new reward on an arm, past rewards are multiplied by $\gamma^s$ if that arm was not seen for $s$ rounds, for a discount factor $\gamma \in (0,1)$), or using a sliding window (only the rewards gathered in the last $\tau$ observations of an arm are taken into account, for a window size $\tau$). Those strategies are passively adaptive as the discount factor or the window size are fixed, and can be tuned as a function of $T$ and $\Upsilon_T$ to achieve a certain regret bound. Discounted UCB (D-UCB) was proposed by Kocsis and Szepesvári (2006) and analyzed by Garivier and Moulines (2011), who prove a $O(\sqrt{\Upsilon_T T}\log(T))$ regret bound if $\gamma$ is tuned using $\Upsilon_T$ and $T$. The same authors proposed the Sliding-Window UCB (SW-UCB) and prove a $O(\sqrt{\Upsilon_T T \log(T)})$ regret bound if $\tau$ is tuned using $\Upsilon_T$ and $T$.

More recently, Raj and Kalyani (2017) proposed the Discounted Thompson Sampling (DTS) algorithm, which performs well in practice for a well-chosen discount factor. However, no theoretical guarantees are given for this strategy, and our experiments did not really confirm its robustness to this choice. The RExp3 algorithm (Besbes et al., 2014) can also be qualified as passively adaptive: it is based on (non-adaptive) restarts of the EXP3 algorithm. Note that this algorithm is introduced for a different setting, where the quantity of interest is not $\Upsilon_T$ but a quantity called the total variational budget $V_T$ (satisfying $V_T \geq \Upsilon_T \Delta^{\text{change}}$, with $\Delta^{\text{change}}$ the minimum magnitude of a change-point). A regret bound scaling as $T^{2/3}$ is proved, which is weaker than existing results in our setting. Hence we do not include this algorithm in our experiments.

The first actively adaptive strategy is Windowed-Mean Shift (Yu and Mannor, 2009), which combines any bandit policy with a change-point detector that performs adaptive restarts of the bandit algorithm. However, this approach is not applicable to our setting as it takes into account side observations. Another line of research on actively adaptive algorithms uses a Bayesian point of view. A Bayesian Change-Point Detection (CPD) algorithm is combined with Thompson Sampling by Mellor and Shapiro (2013), and more recently in the Memory Bandit algorithm of Alami and Féraud (2017). Both algorithms lack theoretical guarantees and their implementation is very costly, hence we do not include them in our experiments. Our closest competitors rather use frequentist CPD algorithms (see, e.g., Basseville and Nikiforov (1993)) combined with a bandit algorithm. The first algorithm of this flavor, the Adapt-EVE algorithm (Hartland et al., 2006), uses a Page-Hinkley test and the UCB policy, but no theoretical guarantees are given. EXP3.R (Allesiardo and Féraud, 2015; Allesiardo et al., 2017) combines a CPD with EXP3, the history of all arms is reset as soon as a sub-optimal arm is detected to have become optimal, and it achieves a $O(\Upsilon_T \sqrt{T \log(T)})$ regret. This is weaker than the $O(\sqrt{\Upsilon_T T \log(T)})$ regret achieved by two recent algorithms, CUSUM-UCB (Liu et al., 2018) and Monitored UCB (M-UCB, Cao et al. (2019)).

CUSUM-UCB is based on a rather complicated two-sided CUSUM test, which uses the first $M$ samples from one arm to compute an initial average, and then detects whether a drift of size larger than $\epsilon$ occurred from this value by checking whether a random walk based on the remaining observations crosses a threshold $h$. Thus it requires the tuning of three parameters, $M$, $\epsilon$ and $h$. CUSUM-UCB performs local restarts using this test, to reset the history of one arm for which the test detects a change. M-UCB uses a much simpler test, based on the most recent $w$ observations from an arm: a change is detected if the absolute difference between the empirical means of the first and second halves of those $w$ observations exceeds a threshold $b$. So it requires the tuning of two parameters, $w$ and $b$. M-UCB performs global restarts using this test, to reset the history of all arms whenever the test detects a change on one of the arms. Compared to CUSUM-UCB, note that M-UCB is numerically much simpler as it only uses a bounded memory, of order $Kw$ for $K$ arms.
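As an illustration of the simpler test used by M-UCB, the sketch below implements the two-halves comparison over a sliding window exactly as described in the previous paragraph; the parameter names $w$ and $b$ follow that description, and the numerical values are arbitrary placeholders, not the tunings used in the original papers.

```python
from collections import deque

def m_ucb_change_detected(recent_rewards, b):
    """Two-halves test in the spirit of M-UCB: compare the empirical means of the
    first and second halves of the last w observations of one arm, and report a
    change if their absolute difference exceeds the threshold b (sketch only)."""
    w = len(recent_rewards)
    half = w // 2
    first, second = recent_rewards[:half], recent_rewards[w - half:]
    return abs(sum(first) / half - sum(second) / half) > b

# Bounded memory: one buffer of the last w rewards per arm, i.e. O(K * w) overall.
w, b = 100, 0.1              # arbitrary illustrative values
window = deque(maxlen=w)     # buffer for one arm
# window.append(reward)
# if len(window) == w and m_ucb_change_detected(list(window), b): restart the algorithm
```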

Advantages of our approach. CUSUM-UCB and M-UCB are both analyzed under some reasonable assumptions on the problem parameters –the means $\mu_a(t)$– mostly saying that the breakpoints are sufficiently far away from each other. However, the proposed guarantees only hold for parameters tuned using some prior knowledge of the means. Indeed, while in both cases the threshold can be set as a function of the horizon $T$ and the number of breakpoints $\Upsilon_T$ (also needed by previous approaches to obtain the best possible bounds), the parameter $\epsilon$ for CUSUM and $w$ for M-UCB require the knowledge of the smallest magnitude of a change-point. In this paper, we propose the first algorithm that does not require this knowledge, and still attains a $O(\sqrt{\Upsilon_T T \log(T)})$ regret. Moreover, we propose the first comparison of the use of local and global restarts within an adaptive algorithm, by studying two variants of our algorithm. This study is supported by both theoretical and empirical results. Finally, on the practical side, while we can note that the proposed GLR test is more complex to implement than the test used by M-UCB, we propose two heuristics to speed it up while not losing much in terms of regret.

3 The Bernoulli GLR Change Point Detector

Sequential change-point detection has been extensively studied in the statistical community (see, e.g., Basseville and Nikiforov (1993) for a survey). In this article, we are interested in detecting changes in the mean of a probability distribution with bounded support. Assume that we collect independent samples $X_1, X_2, \dots$, all from distributions supported in $[0,1]$. We want to discriminate between two possible scenarios: all the samples come from distributions that have a common mean $\mu_0$, or there exists a change-point $\tau$ such that $X_1, \dots, X_\tau$ have some mean $\mu_1$ and $X_{\tau+1}, X_{\tau+2}, \dots$ have a different mean $\mu_2$. A sequential change-point detector is a stopping time $\hat{\tau}$ with respect to the filtration $\mathcal{F}_t := \sigma(X_1, \dots, X_t)$ such that $(\hat{\tau} < \infty)$ means that we reject the hypothesis $\mathcal{H}_0$ (no change).

Generalized Likelihood Ratio tests date back to the seminal work of Barnard (1959) and were for instance studied for change-point detection by Siegmund and Venkatraman (1995). Exploiting the fact that bounded distributions are $(1/4)$-sub-Gaussian (i.e., their moment generating function is dominated by that of a Gaussian distribution with the same mean and a variance $\sigma^2 = 1/4$), the (Gaussian) GLRT, recently studied in depth by Maillard (2019), can be used for our problem. We propose instead to exploit the fact that bounded distributions are also dominated by Bernoulli distributions. We call a sub-Bernoulli distribution any distribution that satisfies $\log \mathbb{E}\left[e^{\lambda X}\right] \leq \phi_{\mu}(\lambda)$ with $\mu = \mathbb{E}[X]$, where $\phi_{\mu}(\lambda) := \log\left(1 - \mu + \mu e^{\lambda}\right)$ is the log moment generating function of a Bernoulli distribution with mean $\mu$. Lemma 1 of Cappé et al. (2013) establishes that any bounded distribution supported in $[0,1]$ is a sub-Bernoulli distribution.

3.1 Presentation of the test

If the samples were all drawn from a Bernoulli distribution, our change-point detection problem would reduce to a parametric sequential test of $\mathcal{H}_0 : (\exists \mu_0 : \forall i,\ X_i \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu_0))$ against the alternative $\mathcal{H}_1 : (\exists \mu_1 \neq \mu_2,\ \tau : X_1, \dots, X_\tau \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu_1)\ \text{and}\ X_{\tau+1}, X_{\tau+2}, \dots \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu_2))$. The Generalized Likelihood Ratio statistic for this test is defined by

$\mathrm{GLR}(n) := \log \frac{\hat{\ell}_1(X_1, \dots, X_n)}{\hat{\ell}_0(X_1, \dots, X_n)},$

where $\hat{\ell}_0(X_1, \dots, X_n)$ and $\hat{\ell}_1(X_1, \dots, X_n)$ denote the (maximized) likelihoods of the first $n$ observations under a model in $\mathcal{H}_0$ and in $\mathcal{H}_1$, respectively. High values of this statistic tend to indicate rejection of $\mathcal{H}_0$. Using the form of the likelihood for Bernoulli distributions, this statistic can be written with the binary relative entropy $\mathrm{kl}(x, y) := x \log\frac{x}{y} + (1 - x)\log\frac{1 - x}{1 - y}$,

$\mathrm{GLR}(n) = \sup_{1 \leq s < n} \left[ s \times \mathrm{kl}\left(\hat{\mu}_{1:s}, \hat{\mu}_{1:n}\right) + (n - s) \times \mathrm{kl}\left(\hat{\mu}_{s+1:n}, \hat{\mu}_{1:n}\right) \right]. \qquad (2)$

Indeed, one can show that the maximizers in the likelihood ratio are the empirical means $\hat{\mu}_{1:s}$, $\hat{\mu}_{s+1:n}$ and $\hat{\mu}_{1:n}$, where for $k \leq k'$, $\hat{\mu}_{k:k'}$ denotes the average of the observations collected between the instants $k$ and $k'$. This motivates the definition of the Bernoulli GLR change-point detector.

Definition

The Bernoulli GLR change-point detector with threshold function $\beta(n, \delta)$ is

$\hat{\tau}_{\delta} := \inf\left\{ n \in \mathbb{N}^* : \sup_{1 \leq s < n} \left[ s \times \mathrm{kl}\left(\hat{\mu}_{1:s}, \hat{\mu}_{1:n}\right) + (n - s) \times \mathrm{kl}\left(\hat{\mu}_{s+1:n}, \hat{\mu}_{1:n}\right) \right] \geq \beta(n, \delta) \right\}. \qquad (3)$

Asymptotic properties of the GLR for change-point detection have been studied by Lai and Xing (2010) for Bernoulli distributions and more generally for one-parameter exponential families, for which the GLR test is defined as in (3) but with $\mathrm{kl}(x, y)$ replaced by the Kullback-Leibler divergence between the two elements of that exponential family that have means $x$ and $y$. For example, the Gaussian GLR studied by Maillard (2019) corresponds to (3) with $\mathrm{kl}(x, y)$ replaced by the quadratic divergence $2(x - y)^2$ when the variance is set to $\sigma^2 = 1/4$, and non-asymptotic properties of this test are given for any $\sigma^2$-sub-Gaussian samples.

In the next section, we provide new non-asymptotic results about the Bernoulli GLR test under the assumption that the samples come from a sub-Bernoulli distribution, which holds for any distribution supported in $[0,1]$. Note that Pinsker’s inequality gives that $\mathrm{kl}(x, y) \geq 2(x - y)^2$, hence the Bernoulli GLR may stop earlier than the Gaussian GLR based on the quadratic divergence $2(x - y)^2$.
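A minimal sketch of the Bernoulli GLR test follows: it computes the statistic (2) by scanning all candidate change positions $s < n$ and compares it to a threshold. The default threshold used here is only an illustrative placeholder of order $\ln(n^{3/2}/\delta)$, in the spirit of the simplified threshold discussed later in Section 3.3; any threshold function $\beta(n, \delta)$, in particular (5), can be plugged in instead.

```python
import numpy as np

def kl_bernoulli(x, y, eps=1e-12):
    """Binary relative entropy kl(x, y) between Bernoulli means x and y."""
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * np.log(x / y) + (1 - x) * np.log((1 - x) / (1 - y))

def glr_statistic(rewards):
    """Bernoulli GLR statistic (2): sup over candidate change positions s of the
    weighted kl's of the two segments against the global empirical mean."""
    n = len(rewards)
    mu_all = np.mean(rewards)
    best = 0.0
    for s in range(1, n):
        mu_left, mu_right = np.mean(rewards[:s]), np.mean(rewards[s:])
        best = max(best, s * kl_bernoulli(mu_left, mu_all)
                         + (n - s) * kl_bernoulli(mu_right, mu_all))
    return best

def glr_change_detected(rewards, delta, threshold=None):
    """Stopping rule (3): report a change as soon as the statistic exceeds beta(n, delta)."""
    n = len(rewards)
    if threshold is None:
        threshold = np.log(3 * n * np.sqrt(n) / delta)   # illustrative placeholder
    return glr_statistic(rewards) > threshold
```

For instance, feeding this detector a stream of Bernoulli(0.1) samples followed by Bernoulli(0.6) samples, it typically fires shortly after the change, while raising false alarms with small probability when $\delta$ is small.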

3.2 Properties of the Bernoulli GLR

In Lemma 3.2 below, we propose a choice of the threshold function $\beta(n, \delta)$ under which the probability that there exists a false alarm under i.i.d. data is small. To define $\beta$, we need to introduce the function $\mathcal{T}$,

$\mathcal{T}(x) := 2 \tilde{h}\left(\frac{h^{-1}(1 + x) + \ln(2\zeta(2))}{2}\right), \qquad (4)$

where for $u \geq 1$ we define $h(u) := u - \ln(u)$ and its inverse $h^{-1}$ on $[1, +\infty)$. And for any $x \geq 0$, $\tilde{h}(x) := h^{-1}(x) e^{1/h^{-1}(x)}$ if $x \geq h^{-1}(1/\ln(3/2))$ and $\tilde{h}(x) := (3/2)\left(x - \ln\ln(3/2)\right)$ otherwise. The function $\mathcal{T}$ is easy to compute numerically. Its use for the construction of concentration inequalities that are uniform in time is detailed in Kaufmann and Koolen (2018), where tight upper bounds on the function $\mathcal{T}$ are also given: $\mathcal{T}(x) \approx x + \ln(x)$ for $x \geq 5$ and $\mathcal{T}(x) \sim x$ when $x$ is large. The proof of Lemma 3.2, which actually holds for any sub-Bernoulli distribution, is given in Appendix C.1.

Lemma

Assume that there exists $\mu_0 \in [0,1]$ such that $\mathbb{E}[X_i] = \mu_0$ and $X_i \in [0,1]$ for all $i$. Then the Bernoulli GLR test satisfies $\mathbb{P}_{\mu_0}\left(\hat{\tau}_{\delta} < \infty\right) \leq \delta$ with the threshold function

$\beta(n, \delta) := 2\, \mathcal{T}\!\left(\frac{\ln\frac{3 n \sqrt{n}}{\delta}}{2}\right) + 6 \ln(1 + \ln(n)). \qquad (5)$

Another key feature of a change-point detector is its detection delay under a model in which a change from $\mu_1$ to $\mu_2$ occurs at time $\tau$. We already observed that, from Pinsker’s inequality, the Bernoulli GLR stops earlier than a Gaussian GLR. Hence, one can leverage some techniques from Maillard (2019) to upper bound the detection delay of the Bernoulli GLR. Letting $\Delta := |\mu_2 - \mu_1|$, one can essentially establish that for $\tau$ of order $\beta(T, \delta)/\Delta^2$ (i.e., enough samples before the change), the delay $\hat{\tau}_{\delta} - \tau$ can be of the same magnitude (i.e., enough samples after the change). In our bandit analysis to follow, the detection delay will be crucially used to control the probability of the good event (in Lemmas D.1 and E.1).

3.3 Practical considerations

Lemma 3.2 provides the first control of the false alarm probability for the Bernoulli GLR employed for bounded data. However, the threshold (5) is not fully explicit as the function $\mathcal{T}$ can only be computed numerically. Note that for sub-Gaussian distributions, results from Maillard (2019) show that a smaller and more explicit threshold can be used to prove an upper bound of $\delta$ on the false alarm probability of the GLR with quadratic divergence $2(x - y)^2$. For the Bernoulli GLR, numerical simulations suggest that the threshold (5) is a bit conservative, and in practice we recommend to keep only the leading term and use $\beta(n, \delta) = \ln\frac{3 n \sqrt{n}}{\delta}$.

Also note that, as any test based on scan statistics, the GLR can be costly to implement, as at every time step it considers all previous time steps as possible positions for a change-point. Thus, in practice the following adaptation may be interesting, based on down-sampling the possible time steps:

$\hat{\tau}_{\delta} := \inf\left\{ n \in \mathcal{N} : \sup_{s \in \mathcal{S},\, s < n} \left[ s \times \mathrm{kl}\left(\hat{\mu}_{1:s}, \hat{\mu}_{1:n}\right) + (n - s) \times \mathrm{kl}\left(\hat{\mu}_{s+1:n}, \hat{\mu}_{1:n}\right) \right] \geq \beta(n, \delta) \right\}, \qquad (6)$

for subsets $\mathcal{N} \subseteq \mathbb{N}^*$ of time steps at which the test is performed and $\mathcal{S} \subseteq \mathbb{N}^*$ of candidate change-point positions. Following the proof of Lemma 3.2, we can easily see that this variant enjoys the exact same false-alarm control. However, the detection delay may be slightly increased. In Appendix F.1 we show that using these practical speedups has little impact on the regret.
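A sketch of this down-sampled scan is given below, reusing the kl_bernoulli helper from the sketch of Section 3.1; the step sizes are illustrative tuning knobs, not values prescribed by the paper.

```python
import numpy as np

def glr_statistic_downsampled(rewards, step_s=5):
    """GLR statistic restricted to candidate change positions s on a sub-grid
    (every step_s-th position), as in the down-sampled test (6)."""
    n = len(rewards)
    mu_all = np.mean(rewards)
    best = 0.0
    for s in range(step_s, n, step_s):
        mu_left, mu_right = np.mean(rewards[:s]), np.mean(rewards[s:])
        best = max(best, s * kl_bernoulli(mu_left, mu_all)
                         + (n - s) * kl_bernoulli(mu_right, mu_all))
    return best

# Similarly, the test itself can be run only every step_n-th observation of an arm
# (the sub-grid N of time steps), dividing the overall cost by roughly step_n * step_s.
```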

4 The GLR-klUCB Algorithm

Our proposed algorithm, GLR-klUCB, combines a bandit algorithm with a change-point detector running on each arm. It also needs a third ingredient, some forced exploration parameterized by $\alpha$, to ensure that each arm is sampled enough, so that changes can also be detected on arms currently under-sampled by the bandit algorithm. GLR-klUCB combines the klUCB algorithm (Cappé et al., 2013), known to be optimal for Bernoulli bandits, with the Bernoulli GLR change-point detector introduced in Section 3. This algorithm, formally stated as Algorithm 1, can be used in any bandit model with bounded rewards, and is expected to be very efficient for Bernoulli distributions, which are relevant for practical applications.

0:  Problem parameters: $T \in \mathbb{N}^*$, $K \in \mathbb{N}^*$.
0:  Algorithm parameters: exploration probability $\alpha \in (0,1)$, confidence level $\delta \in (0,1)$.
0:  Option: Local or Global restart.
1:  Initialization: $\forall a \in \{1, \dots, K\}$, $\tau_a \leftarrow 0$ and $n_a \leftarrow 0$
2:  for all $t = 1, 2, \dots, T$ do
3:     if $\alpha > 0$ and $t \bmod \lfloor K/\alpha \rfloor \in \{1, \dots, K\}$ then
4:         $A_t \leftarrow t \bmod \lfloor K/\alpha \rfloor$. (forced exploration)
5:     else
6:         $A_t \leftarrow \arg\max_a \mathrm{UCB}_a(t)$ as defined in (7)
7:     end if
8:     Play arm $A_t$ and receive the reward $X_{A_t, t}$: $n_{A_t} \leftarrow n_{A_t} + 1$, $Z_{A_t, n_{A_t}} \leftarrow X_{A_t, t}$.
9:     if $\mathrm{GLR}_{\delta}(Z_{A_t, 1}, \dots, Z_{A_t, n_{A_t}})$ = True then
10:         if Global restart then
11:             $\forall a \in \{1, \dots, K\}$, $\tau_a \leftarrow t$ and $n_a \leftarrow 0$.  (global restart)
12:         else
13:             $\tau_{A_t} \leftarrow t$ and $n_{A_t} \leftarrow 0$. (local restart)
14:         end if
15:     end if
16:  end for
Algorithm 1 GLR-klUCB, with Local or Global restarts

The GLR-klUCB algorithm can be viewed as a klUCB algorithm allowing for some restarts on the different arms. A restart happens when the Bernoulli GLR change-point detector detects a change on the arm that has been played (line 9). To be fully specific, $\mathrm{GLR}_{\delta}(Z_{a,1}, \dots, Z_{a, n_a}) = \mathrm{True}$ if and only if

$\sup_{1 \leq s < n_a} \left[ s \times \mathrm{kl}\left(\hat{Z}_{1:s}, \hat{Z}_{1:n_a}\right) + (n_a - s) \times \mathrm{kl}\left(\hat{Z}_{s+1:n_a}, \hat{Z}_{1:n_a}\right) \right] \geq \beta(n_a, \delta),$

with $\beta(n, \delta)$ defined in (5), or $\beta(n, \delta) = \ln\frac{3 n \sqrt{n}}{\delta}$ as recommended in practice, see Section 3.3. We define the (klUCB-like) index used by our algorithm, denoting by $\tau_a(t)$ the last restart that happened for arm $a$ before time $t$, by $n_a(t)$ the number of selections of arm $a$ since that restart, and by $\hat{\mu}_a(t)$ their empirical mean (if $n_a(t) \neq 0$). With the exploration function $f(t) := \ln(t) + 3\ln(\ln(t))$ for $t \geq 3$ (and $f(t) := \ln(3)$ otherwise), the index is defined as

$\mathrm{UCB}_a(t) := \max\left\{ q \in [0,1] : n_a(t) \times \mathrm{kl}\left(\hat{\mu}_a(t), q\right) \leq f\left(t - \tau_a(t)\right) \right\}. \qquad (7)$

In this work, we simultaneously investigate two possible behaviors: global restart (reset the history of all arms once a change was detected on one of them, line 11), and local restart (reset only the history of the arm on which a change was detected, line 13), which are the two different options in Algorithm 1. Under local restart, in the general case the restart times $\tau_a(t)$ are not equal for all arms, hence the index policy associated to (7) is not a standard UCB algorithm, as each index uses a different exploration rate. One can highlight that in the CUSUM-UCB algorithm, which is the only existing algorithm based on local restarts, the UCB indices are defined differently1: the exploration rate $f(t - \tau_a(t))$ is replaced by one based on the total number of selections (of all arms) since the last restart.
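For concreteness, a compact Python sketch of Algorithm 1 follows, combining the index (7), the forced-exploration rounds, and the GLR restarts. It reuses kl_bernoulli and glr_change_detected from the sketches of Section 3; the numerical choices (exploration rate $f$, bisection precision) are illustrative assumptions, not the exact constants of the analysis.

```python
import numpy as np

def klucb_index(mean, n_pulls, time_since_restart, precision=1e-5):
    """klUCB index (7): largest q in [0, 1] with n * kl(mean, q) <= f(t - tau),
    computed by bisection; f is a standard klUCB-style exploration rate."""
    t = max(time_since_restart, 3)
    f_t = np.log(t) + 3 * np.log(np.log(t))
    low, high = mean, 1.0
    while high - low > precision:
        mid = (low + high) / 2
        if n_pulls * kl_bernoulli(mean, mid) <= f_t:
            low = mid
        else:
            high = mid
    return low

def glr_klucb(env, K, T, alpha, delta, local_restart=True):
    """Sketch of GLR-klUCB (Algorithm 1); env(t, a) returns the reward of arm a at time t."""
    history = [[] for _ in range(K)]          # rewards of each arm since its last restart
    last_restart = [0] * K                    # tau_a: last restart time of each arm
    period = int(K / alpha)                   # forced exploration period, assumed > K
    for t in range(1, T + 1):
        r = t % period
        if 1 <= r <= K:
            chosen = r - 1                    # forced exploration (lines 3-4)
        else:                                 # klUCB selection (line 6)
            chosen = max(range(K), key=lambda a: float('inf') if not history[a] else
                         klucb_index(np.mean(history[a]), len(history[a]), t - last_restart[a]))
        history[chosen].append(env(t, chosen))           # play and observe (line 8)
        if glr_change_detected(history[chosen], delta):  # change detected (line 9)
            if local_restart:                            # local restart (line 13)
                history[chosen], last_restart[chosen] = [], t
            else:                                        # global restart (line 11)
                history = [[] for _ in range(K)]
                last_restart = [t] * K
```

A call like glr_klucb(env, K=3, T=10000, alpha=0.05, delta=0.01) runs the local-restart variant; the tunings of $\alpha$ and $\delta$ prescribed by Corollaries 4.1 and 4.2 can be plugged in directly.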

The forced exploration scheme used in GLR-klUCB (lines 3–4) generalizes the deterministic exploration scheme proposed for M-UCB by Cao et al. (2019), whereas CUSUM-UCB performs randomized exploration. A consequence of this forced exploration is given in Proposition 4 (proved in Appendix B).

Proposition

For every pair of instants $s < t$ between two restarts on arm $a$ (i.e., such that no restart on arm $a$ occurs in $(s, t]$), it holds that $n_a(t) - n_a(s) \geq \frac{\alpha}{K}(t - s) - 1$.

4.1 Results for GLR-klUCB using Global Changes

Recall that $\tau^{(k)}$ denotes the position of the $k$-th break-point and let $\mu_a^{(k)}$ be the mean of arm $a$ on the segment between the $k$-th and $(k+1)$-th breakpoints: $\mu_a^{(k)} := \mu_a(t)$ for $t \in (\tau^{(k)}, \tau^{(k+1)}]$. We also introduce the gap $\Delta_a^{(k)} := |\mu_a^{(k)} - \mu_a^{(k-1)}|$ and the largest gap at break-point $k$ as $\Delta^{(k)} := \max_a \Delta_a^{(k)}$.

Assumption 4.1 below is easy to interpret and standard in non-stationary bandits. It requires that the distance between two consecutive breakpoints is large enough: how large depends on the magnitude of the largest change that happens at those two breakpoints. Under this assumption, we provide in Theorem 4.1 a finite-time problem-dependent regret upper bound. It features the parameters $\alpha$ and $\delta$, the KL-divergence terms expressing the hardness of the (stationary) MAB problem between two breakpoints, and the terms $1/(\Delta^{(k)})^2$ expressing the hardness of the change-point detection problem.

Assumption

Define Then we assume that for all , .

Theorem

For every $\alpha$ and $\delta$ for which Assumption 4.1 is satisfied, the regret of GLR-klUCB with parameters $\alpha$ and $\delta$ based on Global Restart satisfies

Corollary

For “easy” problems satisfying the corresponding Assumption 4.1, let $\Delta^{\text{opt}}$ denote the smallest value of a sub-optimality gap on one of the stationary segments, and $\Delta^{\text{change}}$ be the smallest magnitude of any change-point on any arm.

  1. Choosing $\alpha$ and $\delta$ as functions of $T$ only gives $R_T = O\left(\Upsilon_T \sqrt{T \log(T)}\right)$,

  2. Choosing $\alpha$ and $\delta$ using the knowledge of $\Upsilon_T$ gives $R_T = O\left(\sqrt{\Upsilon_T T \log(T)}\right)$.

4.2 Results for GLR-klUCB using Local Changes

A few new notations are needed to state a regret bound for GLR-klUCB using local changes. We let $\tau_a^{(i)}$ denote the position of the $i$-th change-point for arm $a$: $\tau_a^{(i)} := \inf\{t > \tau_a^{(i-1)} : \mu_a(t+1) \neq \mu_a(t)\}$, with the convention $\tau_a^{(0)} := 0$, and let $\mu_a^{(i)}$ be the $i$-th value of the mean of arm $a$, such that $\mu_a(t) = \mu_a^{(i)}$ for $t \in (\tau_a^{(i)}, \tau_a^{(i+1)}]$. We also introduce the gap $\Delta_a^{(i)} := |\mu_a^{(i)} - \mu_a^{(i-1)}|$.

Assumption 4.2 requires that any two consecutive change-points on a given arm are sufficiently spaced (relatively to the magnitude of those two change-points). Under that assumption, Theorem 4.2 provides a regret upper bound that scales with similar quantities as that of Theorem 4.1, except that the number of breakpoints $\Upsilon_T$ is replaced with the total number of change-points $C_T$.

Assumption

Define We assume that for all arm and all , .

Theorem

For every $\alpha$ and $\delta$ for which Assumption 4.2 is satisfied, the regret of GLR-klUCB with parameters $\alpha$ and $\delta$ based on Local Restart satisfies

where .

Corollary

For “easy” problems satisfying the corresponding Assumption 4.2, with $\Delta^{\text{opt}}$ and $\Delta^{\text{change}}$ defined as in Corollary 4.1, the following holds.

  1. Choosing $\alpha$ and $\delta$ as functions of $T$ only gives $R_T = O\left(C_T \sqrt{T \log(T)}\right)$,

  2. Choosing $\alpha$ and $\delta$ using the knowledge of $C_T$ gives $R_T = O\left(\sqrt{C_T T \log(T)}\right)$.

4.3 Interpretation

Theorems 4.1 and 4.2 both show that there exists a tuning of $\alpha$ and $\delta$ as a function of $T$ and the number of changes ($\Upsilon_T$ or $C_T$) such that the regret is of order $O(\sqrt{\Upsilon_T T \log(T)})$ and $O(\sqrt{C_T T \log(T)})$ respectively, where the $O$ notations ignore the gap terms. For very particular instances such that $C_T = \Upsilon_T$, i.e., at each break-point only one arm changes (e.g., problem 1 in Section 6), the theory advocates the use of local changes. Indeed, while the regret guarantees obtained are similar, those obtained for local changes hold for a wider variety of problems as Assumption 4.2 is less stringent than Assumption 4.1. Besides those specific instances, our results are essentially worse for local than for global changes. However, we only obtain regret upper bounds – thus providing a theoretical safety net for both variants of our algorithm – and the practical story is different, as discussed in Section 6. We find that GLR-klUCB performs better with local restarts.

One can note that with the tuning of $\alpha$ and $\delta$ prescribed by Corollaries 4.1 and 4.2, our regret bounds hold for problem instances for which two consecutive breakpoints (or change-points on an arm) are separated by a number of time steps of order $\sqrt{T \log(T)}$ or more. Hence those guarantees are valid on “easy” problem instances only, with few changes of a large magnitude (e.g., not for the harder problems like problem 3). However, this does not prevent our algorithms from performing well on more realistic instances, and numerical experiments support this claim. Note that M-UCB (Cao et al., 2019) is also analyzed under the same type of unrealistic assumptions, while its practical performance is illustrated beyond those.

5 A Unified Regret Analysis

In this section, we sketch a unified proof for Theorem 4.1 and 4.2, whose detailed proofs can be found in Appendix D and E respectively. We emphasize that our approach is significantly different from those proposed by Cao et al. (2019) for M-UCB and by Liu et al. (2018) for CUSUM-UCB.

Recall that the regret is defined as $R_T := \mathbb{E}\left[\sum_{t=1}^{T}\left(\mu_{a_t^*}(t) - \mu_{A_t}(t)\right)\right]$. Introducing the (deterministic) set $\mathcal{T}_{\alpha}$ of time steps at which the forced exploration is performed before time $T$ (see lines 3–4 in Algorithm 1), one can write, using notably that $\mu_{a_t^*}(t) - \mu_{A_t}(t) \leq 1$ due to the bounded rewards,

Introducing some good event $\mathcal{E}$ to be specified in each case, one can write

Each analysis requires to define an appropriate good event, stating that some change-points are detected within a reasonable delay. Each regret bound then follows from upper bounds on term (A), term (B), and on the failure probability $\mathbb{P}(\bar{\mathcal{E}})$. To control (A) and (B), we split the sum over consecutive segments for global changes and over each arm for local changes, and use elements from the analysis of klUCB of Cappé et al. (2013).

The tricky part of each proof, which crucially exploits Assumption 4.1 or 4.2, is actually to obtain an upper bound on $\mathbb{P}(\bar{\mathcal{E}})$. For example, for local changes (Theorem 4.2), the good event is defined as

where $\hat{\tau}_a^{(i)}$ is defined as the $i$-th change detected by the algorithm on arm $a$ and $d_a^{(i)}$ is defined in Assumption 4.2. Introducing the event $\mathcal{E}_i$ that all the changes up to the $i$-th have been detected, a union bound yields the following decomposition:

Term (a) is related to the control of the probability of false alarm, which is given by Lemma 3.2 for a change-point detector run in isolation. Observe that under the bandit algorithm, the change-point detector associated to arm $a$ is based on (possibly much) fewer than $t$ samples from arm $a$, which makes a false alarm even less likely to occur. Hence, it is easy to show that term (a) is upper bounded by $\delta$.

Term (b) is related to the control of the detection delay, which is more tricky to obtain under the GLR-klUCB adaptive sampling scheme, when compared to a result like Theorem 6 in Maillard (2019) for the change-point detector run in isolation. More precisely, we need to leverage the forced exploration (Proposition 4) to be sure we have enough samples for detection. This explains why the delays defined in Assumption 4.2 are scaled by $K/\alpha$. Using some elementary calculus and a concentration inequality given in Lemma C.2, we can finally prove that term (b) is small as well; combining the two shows that the “bad event” $\bar{\mathcal{E}}$ is unlikely.

6 Experimental Results

In this section we report results of numerical simulations performed on synthetic data, comparing the performance of GLR-klUCB against other state-of-the-art approaches on some piece-wise stationary bandit problems. For simplicity, we restrict to rewards generated from Bernoulli distributions, even though GLR-klUCB can be applied to any bounded distributions.

Algorithms and parameters tuning.

We include in our study two algorithms designed for the classical MAB, klUCB (Garivier and Cappé, 2011) and Thompson sampling (Agrawal and Goyal, 2012; Kaufmann et al., 2012), as well as an “oracle” version of klUCB, that we call Oracle-Restart. This algorithm knows the exact locations of the breakpoints, and restarts klUCB at those locations (without any delay).

Then, we compare our algorithms to several competitors designed for a piece-wise stationary model. For a fair comparison, all algorithms that use UCB as a sub-routine were adapted to use klUCB instead, which yields better performance2. Moreover, all the algorithms are tuned as described in the corresponding paper, using in particular the knowledge of the number of breakpoints $\Upsilon_T$ and the horizon $T$. We first include three passively adaptive algorithms: Discounted klUCB (D-klUCB, Kocsis and Szepesvári (2006)); Sliding-Window klUCB (SW-klUCB, Garivier and Moulines (2011)); and Discounted Thompson sampling (DTS, Raj and Kalyani (2017)). For this last algorithm, the discount factor suggested by the authors was performing significantly worse on our problem instances, so a different value was used.

Our main goal is to compare against actively adaptive algorithms. We include CUSUM-klUCB (Liu et al., 2018), tuned with and for easy problems () and for hard problems (), and with , , as suggested in the paper. Finally, we include M-klUCB (Cao et al., 2019), tuned with , based on a prior knowledge of the problems as the formula using given in the paper is too large for small horizons (on all our problem instances), a threshold and as suggested by Remark 4 in the paper.

For GLR-klUCB, we explore the two different options with Local and Global restarts, using respectively , and , from Corollaries 4.1 and 4.2. The constant is set to (we show in Appendix G a certain robustness, with similar regret as soon as ). To speed up our simulations, two optimizations are used, with , and CUSUM also uses the first trick with (see Appendix F.1 for more details).

Results.

We report results obtained on three different piece-wise stationary bandit problems, illustrated in Figure 1 in Appendix A and described in more detail below. Additional results on two more problems can be found in Appendix H, along with regret curves for all problems. For each experiment, the regret was estimated using independent runs.

Problem 1. The problem has a few arms, whose means change a small number of times before the horizon $T$; the arm means are shown in Fig. 1. Note that changes happen on only one arm at a time (i.e., $C_T = \Upsilon_T$), and the optimal arm changes once, with a large gap.

Problem 2 (see Fig. 1). This problem is close to Problem 1, with a small minimum optimality gap. However, all arms change at every breakpoint (i.e., $C_T = K \Upsilon_T$), with identical gaps. The first optimal arm decreases at every change, and one arm stays the worst throughout.

Problem 3 (see Fig. 1). This problem is harder, with more arms and breakpoints than the previous ones. Most arms change at almost every breakpoint, and the means lie in a restricted sub-interval of $[0,1]$. The gaps are much smaller than for the first problems. Note that the assumptions of our regret upper bounds are violated, as are the assumptions for the analyses of M-UCB and CUSUM-UCB. This problem is inspired by Figure 3 of Cao et al. (2019), where the synthetic data was obtained from manipulations on a real-world database of clicks from Yahoo!.

Table 1 shows the final regret obtained for each algorithm. Results highlighted in bold show the best non-oracle algorithm for each experiment, with our proposal being the best non-oracle strategy on two of the three problems. Thompson sampling and klUCB are efficient, and better than Discounted-klUCB, which is very inefficient. DTS and SW-klUCB can sometimes be more efficient than their stationary counterparts, but perform worse than the Oracle and most actively adaptive algorithms. M-klUCB and CUSUM-klUCB outperform the previous algorithms, but GLR-klUCB is often better. On these problems, our proposal with Local restarts is always more efficient than with Global restarts. Note that on problem 2, all means change at every breakpoint, hence one could expect global restarts to be more efficient, yet our experiments show the superiority of local restarts on every instance.

Algorithms \ Problems    Pb 1    Pb 2    Pb 3
Oracle-Restart klUCB
klUCB
Discounted-klUCB
SW-klUCB
Thompson sampling
DTS
M-klUCB
CUSUM-klUCB
GLR-klUCB(Local)
GLR-klUCB(Global)
Table 1: Mean regret ± std-dev, for different algorithms on problems 1, 2 and 3.

One can note that the best non-oracle strategies are actively adaptive, thus our experiments confirm that an efficient bandit algorithm (e.g., klUCB) combined with an efficient change point detector (e.g., GLR) provides efficient strategies for the piece-wise stationary model.

7 Conclusion

We proposed a new algorithm for the piece-wise stationary bandit problem, GLR-klUCB, which combines the klUCB algorithm with the Bernoulli GLR change-point detector. This actively adaptive method attains state-of-the-art regret upper bounds when tuned with a prior knowledge of the number of changes $\Upsilon_T$, but without any other prior knowledge on the problem, unlike CUSUM-UCB and M-UCB which require to know a lower bound on the smallest magnitude of a change. We also gave numerical evidence of the efficiency of our proposal.

We believe that our new proof technique could be used to analyze GLR-klUCB under less stringent assumptions than the ones made in this paper (and in previous works), which would require only a few “meaningful” changes to be detected. This interesting research direction is left for future work, but the hope is that the regret would be expressed in terms of this number of meaningful changes instead of $\Upsilon_T$. We shall also investigate whether actively adaptive approaches can attain a $O(\sqrt{\Upsilon_T T \log(T)})$ regret upper bound without the knowledge of $\Upsilon_T$. Finally, we would like to study in the future possible extensions of our approach to the slowly varying model (Wei and Srivastava, 2018).

Acknowledgment

Thanks to Odalric-Ambrym Maillard at Inria Lille for useful discussions, and thanks to Christophe Moy at University Rennes 1. Open Source: the simulation code used for the experiments is written in Python 3. It is open-sourced at GitHub.com/SMPyBandits/SMPyBandits and fully documented at SMPyBandits.GitHub.io; for more details see the companion article (Besson, 2018). The page SMPyBandits.GitHub.io/NonStationaryBandits.html explains how to reproduce the experiments used for this article.

Appendix A Illustrations of our Benchmark

(a) Problem 1: changes occur on only one arm at a time (i.e., $C_T = \Upsilon_T$).
(b) Problem 2: changes occur on all arms at every breakpoint (i.e., $C_T = K \Upsilon_T$).
(c) Problem 3: changes occur on most arms at each breakpoint ($\Upsilon_T < C_T < K \Upsilon_T$).
Figure 1: History of the arm means $\mu_a(t)$ for the three problems considered.

Appendix B Proof of Proposition 4

We consider one arm $a$, and when the GLR-klUCB algorithm is running, we consider two time steps $s < t$ chosen between two restart times for that arm. Lines 3–4 state that $A_{t'} = t' \bmod \lfloor K/\alpha \rfloor$ whenever $t' \bmod \lfloor K/\alpha \rfloor \in \{1, \dots, K\}$ (see Algorithm 1 for details). Thus we have