Table of Contents¶
1 Requirements and helper functions
1.1 Requirements
1.2 Mathematical notations for stationary problems
1.3 Generating fake stationary data
1.4 Mathematical notations for piecewise stationary problems
1.5 Generating fake piecewise stationary data
2 Python implementations of some statistical tests
2.1 A stupid detection test (pure random!)
2.2 Monitored
2.3 CUSUM
2.4 PHT
2.5 Gaussian GLR
2.6 Bernoulli GLR
2.7 List of all Python algorithms
3 Numba implementations of some statistical tests
4 Cython implementations of some statistical tests
5 Comparing the different implementations
5.1 Toy data
5.2 Checking time and memory efficiency?
5.3 Checking detection delay
5.4 Checking false alarm probabilities
5.5 Checking missed detection probabilities
6 Conclusions
Requirements and helper functions¶
Requirements¶
This notebook requires numpy and matplotlib to be installed. I also explore the usage of numba and cython later, so they are needed as well.
In [1]:
!pip install watermark numpy scipy matplotlib numba cython tqdm
%load_ext watermark
%watermark -v -m -p numpy,scipy,matplotlib,numba,cython,tqdm -a "Lilian Besson"
Lilian Besson
CPython 3.6.7
IPython 7.0.1
numpy 1.14.5
scipy 1.1.0
matplotlib 3.0.2
numba 0.37.0
cython 0.27.2
tqdm 4.19.6
compiler : GCC 8.2.0
system : Linux
release : 4.15.0-38-generic
machine : x86_64
processor : x86_64
CPU cores : 4
interpreter: 64bit
In [2]:
import numpy as np
import matplotlib.pyplot as plt
import numba
In [3]:
def in_notebook():
    """Check if the code is running inside a Jupyter notebook or not. Cf. http://stackoverflow.com/a/39662359/.

    >>> in_notebook()
    False
    """
    try:
        shell = get_ipython().__class__.__name__
        if shell == 'ZMQInteractiveShell':  # Jupyter notebook or qtconsole?
            return True
        elif shell == 'TerminalInteractiveShell':  # Terminal running IPython?
            return False
        else:
            return False  # Other type (?)
    except NameError:
        return False  # Probably standard Python interpreter
In [4]:
if in_notebook():
    from tqdm import tqdm_notebook as tqdm
    print("Info: Using the Jupyter notebook version of the tqdm() decorator, tqdm_notebook() ...")  # DEBUG
else:
    from tqdm import tqdm
Info: Using the Jupyter notebook version of the tqdm() decorator, tqdm_notebook() ...
Mathematical notations for stationary problems¶
We consider \(K \geq 1\) arms, which are distributions \(\nu_k\). We focus on Bernoulli distributions, which are characterized by their means, \(\nu_k = \mathcal{B}(\mu_k)\) for \(\mu_k\in[0,1]\). A stationary bandit problem is defined here by the vector \([\mu_1,\dots,\mu_K]\).
For a fixed problem and a horizon \(T\in\mathbb{N}\), \(T\geq1\), we draw samples from the \(K\) distributions to get data: \(\forall t, r_k(t) \sim \nu_k\), i.e., \(\mathbb{P}(r_k(t) = 1) = \mu_k\) and \(r_k(t) \in \{0,1\}\).
Generating fake stationary data¶
Here we give some examples of stationary problems and examples of data we can draw from them.
In [5]:
def bernoulli_samples(means, horizon=1000):
    """Draw `horizon` i.i.d. samples from the Bernoulli distributions of the given `means`."""
    if np.size(means) == 1:
        return np.random.binomial(1, means, size=horizon)
    else:
        results = np.zeros((np.size(means), horizon))
        for i, mean in enumerate(means):
            results[i] = np.random.binomial(1, mean, size=horizon)
        return results
In [6]:
problem1 = [0.5]
bernoulli_samples(problem1, horizon=20)
Out[6]:
array([0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0])
For a bandit problem with \(K \geq 2\) arms, the goal is to design an online learning algorithm that roughly does the following (see the code sketch after this list):
- For time \(t=1\) to \(t=T\) (unknown horizon)
- Algorithm \(A\) decide to draw arm \(A(t) \in\{1,\dots,K\}\),
- Get the reward \(r(t) = r_{A(t)}(t) \sim \nu_{A(t)}\) from the (Bernoulli) distribution of that arm,
- Give this observation of reward \(r(t)\) coming from arm \(A(t)\) to the algorithm,
- Update internal state of the algorithm
An algorithm is efficient if it obtains a high (expected) sum of rewards, i.e., a high \(\sum_{t=1}^T r(t)\).
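As a small sketch of this interaction loop (the algorithm object and its choice()/getReward() methods are a hypothetical interface here, in the spirit of the policies of SMPyBandits):

# A minimal sketch of the bandit interaction loop described above.
# The `algorithm` object and its choice()/getReward() methods are a
# hypothetical interface, in the spirit of the policies of SMPyBandits.
def play_one_run(algorithm, means, horizon=1000):
    rewards = np.zeros(horizon)
    for t in range(horizon):
        arm = algorithm.choice()                     # the algorithm chooses an arm A(t)
        reward = np.random.binomial(1, means[arm])   # draw r(t) from the Bernoulli arm A(t)
        algorithm.getReward(arm, reward)             # give the observation back to the algorithm
        rewards[t] = reward
    return np.sum(rewards)  # the sum of rewards, to be maximized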
In [7]:
problem2 = [0.1, 0.5, 0.9]
bernoulli_samples(problem2, horizon=20)
Out[7]:
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[1., 1., 1., 1., 0., 1., 1., 1., 0., 0., 0., 0., 0., 0., 1., 1.,
0., 1., 1., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 1.]])
For instance, on these data, the best arm is clearly the third one, with expected reward \(\mu^* = \max_k \mu_k = 0.9\).
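This can be checked numerically: with a longer horizon, the empirical mean of each arm converges to its true mean (a quick sanity check, reusing the bernoulli_samples function defined above):

# Sanity check: empirical means converge to the true means [0.1, 0.5, 0.9]
samples = bernoulli_samples(problem2, horizon=10000)
empirical_means = np.mean(samples, axis=1)  # one empirical mean per arm
print(empirical_means)             # should be close to [0.1, 0.5, 0.9]
print(np.argmax(empirical_means))  # index of the best arm, should be 2 (the third arm)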
Mathematical notations for piecewise stationary problems¶
Now we fix the horizon \(T\in\mathbb{N}\), \(T\geq1\), and we also consider a set of \(\Upsilon_T\) break points, \(\tau_1,\dots,\tau_{\Upsilon_T} \in\{1,\dots,T\}\). We denote \(\tau_0 = 0\) and \(\tau_{\Upsilon_T+1} = T\) for notational convenience. We can assume that the breakpoints are far enough from each other, for instance that there exists an integer \(N\in\mathbb{N},N\geq1\) such that \(\min_{i=0}^{\Upsilon_T} \tau_{i+1} - \tau_i \geq N K\). That is, on each stationary interval, a uniform sampling of the \(K\) arms gives at least \(N\) samples per arm.
Now, in any stationary interval \([\tau_i + 1, \tau_{i+1}]\), the \(K \geq 1\) arms are distributions \(\nu_k^{(i)}\). We focus on Bernoulli distributions, which are characterized by their means, \(\nu_k^{(i)} := \mathcal{B}(\mu_k^{(i)})\) for \(\mu_k^{(i)}\in[0,1]\). A piecewise stationary bandit problem is defined here by the vector \([\mu_k^{(i)}]_{1\leq k \leq K, 1 \leq i \leq \Upsilon_T}\).
For a fixed problem and a horizon \(T\in\mathbb{N}\), \(T\geq1\), we draw samples from the \(K\) distributions to get data: \(\forall t, r_k(t) \sim \nu_k^{(i)}\) for \(i\) the unique index of stationary interval such that \(t\in[\tau_i + 1, \tau_{i+1}]\).
Generating fake piecewise stationary data¶
The format used to define a piecewise stationary problem is the following. It is compact but generic!
The first example considers a unique arm, with 2 uniformly spaced breakpoints.
- On the first interval, for instance from \(t=1\) to \(t=500\), that is \(\tau_1 = 500\), \(\mu_1^{(1)} = 0.1\),
- On the second interval, for instance from \(t=501\) to \(t=1000\), that is \(\tau_2 = 1000\), \(\mu_1^{(2)} = 0.5\),
- On the third interval, for instance from \(t=1001\) to \(t=1500\), \(\mu_1^{(3)} = 0.8\).
In [8]:
# With 1 arm only!
problem_piecewise_0 = lambda horizon: {
    "listOfMeans": [
        [0.1],  # 0 to 499
        [0.5],  # 500 to 999
        [0.8],  # 1000 to 1499
    ],
    "changePoints": [
        int(0 * horizon / 1500.0),
        int(500 * horizon / 1500.0),
        int(1000 * horizon / 1500.0),
    ],
}
In [9]:
# With 2 arms
problem_piecewise_1 = lambda horizon: {
    "listOfMeans": [
        [0.1, 0.2],  # 0 to 399
        [0.1, 0.3],  # 400 to 799
        [0.5, 0.3],  # 800 to 1199
        [0.4, 0.3],  # 1200 to 1599
        [0.3, 0.9],  # 1600 to end
    ],
    "changePoints": [
        int(0 * horizon / 2000.0),
        int(400 * horizon / 2000.0),
        int(800 * horizon / 2000.0),
        int(1200 * horizon / 2000.0),
        int(1600 * horizon / 2000.0),
    ],
}
In [10]:
# With 3 arms
problem_piecewise_2 = lambda horizon: {
    "listOfMeans": [
        [0.2, 0.5, 0.9],  # 0 to 399
        [0.2, 0.2, 0.9],  # 400 to 799
        [0.2, 0.2, 0.1],  # 800 to 1199
        [0.7, 0.2, 0.1],  # 1200 to 1599
        [0.7, 0.5, 0.1],  # 1600 to end
    ],
    "changePoints": [
        int(0 * horizon / 2000.0),
        int(400 * horizon / 2000.0),
        int(800 * horizon / 2000.0),
        int(1200 * horizon / 2000.0),
        int(1600 * horizon / 2000.0),
    ],
}
In [11]:
# With 3 arms
problem_piecewise_3 = lambda horizon: {
    "listOfMeans": [
        [0.4, 0.5, 0.9],  # 0 to 399
        [0.5, 0.4, 0.7],  # 400 to 799
        [0.6, 0.3, 0.5],  # 800 to 1199
        [0.7, 0.2, 0.3],  # 1200 to 1599
        [0.8, 0.1, 0.1],  # 1600 to end
    ],
    "changePoints": [
        int(0 * horizon / 2000.0),
        int(400 * horizon / 2000.0),
        int(800 * horizon / 2000.0),
        int(1200 * horizon / 2000.0),
        int(1600 * horizon / 2000.0),
    ],
}
Now we can write a utility function that transforms this compact representation into a full history of means.
In [12]:
def getFullHistoryOfMeans(problem, horizon=2000):
    """Return the full history of the means of the arms, for a piecewise stationary MAB.

    - It is a numpy array of shape (nbArms, horizon).
    """
    pb = problem(horizon)
    listOfMeans, changePoints = pb['listOfMeans'], pb['changePoints']
    nbArms = len(listOfMeans[0])
    meansOfArms = np.ones((nbArms, horizon))
    for armId in range(nbArms):
        nbChangePoint = 0
        for t in range(horizon):
            # Move on to the next stationary interval when crossing the next change point
            if nbChangePoint < len(changePoints) - 1 and t >= changePoints[nbChangePoint + 1]:
                nbChangePoint += 1
            meansOfArms[armId][t] = listOfMeans[nbChangePoint][armId]
    return meansOfArms
For example:
In [13]:
getFullHistoryOfMeans(problem_piecewise_0, horizon=50)
Out[13]:
array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8,
0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8]])
In [14]:
getFullHistoryOfMeans(problem_piecewise_1, horizon=50)
Out[14]:
array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
0.4, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
[0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 0.3, 0.3,
0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
0.3, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]])
In [15]:
getFullHistoryOfMeans(problem_piecewise_2, horizon=50)
Out[15]:
array([[0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7,
0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7],
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
[0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9,
0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]])
In [16]:
getFullHistoryOfMeans(problem_piecewise_3, horizon=50)
Out[16]:
array([[0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6,
0.6, 0.6, 0.6, 0.6, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7,
0.7, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8],
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.4, 0.4, 0.4,
0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
0.3, 0.3, 0.3, 0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
[0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.7, 0.7, 0.7,
0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
0.3, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]])
And now we need to be able to generate samples from such distributions.
In [17]:
def piecewise_bernoulli_samples(problem, horizon=1000):
    """Draw samples from the piecewise stationary Bernoulli distributions of the given problem."""
    fullMeans = getFullHistoryOfMeans(problem, horizon=horizon)
    nbArms, horizon = np.shape(fullMeans)
    results = np.zeros((nbArms, horizon))
    for i in range(nbArms):
        mean_i = fullMeans[i, :]
        for t in range(horizon):
            results[i, t] = np.random.binomial(1, mean_i[t])
    return results
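Note that np.random.binomial broadcasts over an array of probabilities, so the double loop above can be replaced by a one-liner (an equivalent, vectorized sketch; the function name is a hypothetical helper):

def piecewise_bernoulli_samples_vectorized(problem, horizon=1000):
    """Vectorized version: np.random.binomial broadcasts over the (nbArms, horizon) array of means."""
    fullMeans = getFullHistoryOfMeans(problem, horizon=horizon)
    return np.random.binomial(1, fullMeans)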
Examples:
In [18]:
getFullHistoryOfMeans(problem_piecewise_0, horizon=100)
piecewise_bernoulli_samples(problem_piecewise_0, horizon=100)
Out[18]:
array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8,
0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8,
0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8]])
Out[18]:
array([[0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 0., 0., 1., 1., 0., 1., 0., 0., 0., 1., 0.,
0., 0., 1., 1., 1., 1., 0., 1., 1., 0., 1., 0., 1., 1., 1., 1.,
1., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1.,
1., 1., 0., 1.]])
We easily spot the (approximate) locations of the breakpoints!
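To visualize the breakpoints even more clearly, one can smooth the binary samples with a moving average and plot it against the true means (a small plotting sketch; the window size of 50 is an arbitrary choice):

# Smooth the binary samples to reveal the piecewise constant means
horizon = 1000
data = piecewise_bernoulli_samples(problem_piecewise_0, horizon=horizon)[0]
means = getFullHistoryOfMeans(problem_piecewise_0, horizon=horizon)[0]
window = 50  # arbitrary smoothing window
smoothed = np.convolve(data, np.ones(window) / window, mode='same')
plt.plot(smoothed, label="moving average of the samples")
plt.plot(means, label="true means")
plt.legend()
plt.show()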
Another example:
In [19]:
piecewise_bernoulli_samples(problem_piecewise_1, horizon=100)
Out[19]:
array([[0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 1.,
1., 1., 0., 1., 0., 0., 1., 0., 1., 1., 0., 0., 0., 1., 1., 1.,
1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.,
1., 0., 1., 1.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1.,
1., 0., 0., 0., 1., 0., 0., 0., 1., 1., 1., 1., 0., 0., 1., 0.,
0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0., 1.,
1., 1., 0., 1., 1., 1., 0., 0., 0., 0., 1., 1., 0., 1., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
0., 1., 0., 1.]])
Python implementations of some statistical tests¶
I will implement here the following statistical tests (and I give a link to the implementation of the corresponding bandit policy in my framework SMPyBandits, https://smpybandits.github.io/):
- Monitored (based on a McDiarmid inequality), for Monitored-UCB or M-UCB,
- CUSUM, for CUSUM-UCB (https://smpybandits.github.io/docs/Policies.CD_UCB.html?highlight=cusum#Policies.CD_UCB.CUSUM_IndexPolicy),
- PHT, for PHT-UCB (https://smpybandits.github.io/docs/Policies.CD_UCB.html?highlight=cusum#Policies.CD_UCB.PHT_IndexPolicy),
- Gaussian GLR, for GaussianGLR-UCB (https://smpybandits.github.io/docs/Policies.CD_UCB.html?highlight=glr#Policies.CD_UCB.GaussianGLR_IndexPolicy),
- Bernoulli GLR, for BernoulliGLR-UCB (https://smpybandits.github.io/docs/Policies.CD_UCB.html?highlight=glr#Policies.CD_UCB.BernoulliGLR_IndexPolicy).
A stupid detection test (pure random!)¶
Just to be sure that the test functions work as intended, I start by writing a stupid change detection test, which is purely random!
In [20]:
def PurelyRandom(all_data, t, proba=0.5):
    """Signal a detection with probability `proba`, at every time step, ignoring the data."""
    return np.random.random() < proba
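By construction, this detector fires at each time step with probability proba, whatever the data: its first (false) alarm time is geometrically distributed, with mean 1/proba. A quick empirical check (a sketch; the Monte Carlo parameters are arbitrary):

# The first (false) alarm time of the purely random test is geometric with mean 1/proba
data = bernoulli_samples([0.5], horizon=1000)
first_alarms = []
for _ in range(1000):
    for t in range(1000):
        if PurelyRandom(data, t, proba=0.05):
            first_alarms.append(t + 1)
            break
print(np.mean(first_alarms))  # should be close to 1 / 0.05 = 20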
Monitored¶
In [21]:
NB_ARMS = 1
WINDOW_SIZE = 80
In [22]:
def Monitored(all_data, t,
              window_size=WINDOW_SIZE, threshold_b=None,
    ):
    r""" A change is detected for the current arm if the following test is true:

    .. math:: |\sum_{i=w/2+1}^{w} Y_i - \sum_{i=1}^{w/2} Y_i | > b ?