This notebook requires numpy and matplotlib to be installed.
One function needs a function from scipy.special.
joblib is used for parallel computations (at the end).
The performance bottleneck of the main functions is a few very simple functions, for which we can write efficient versions using either numba.jit or cython.
!pip3 install watermark numpy scipy matplotlib joblib numba cython
%load_ext watermark
%watermark -v -m -p numpy,scipy,matplotlib,joblib,numba,cython -a "Lilian Besson and Emilie Kaufmann"
import numpy as np
try:
from tqdm import tqdm_notebook as tqdm
except ImportError:
def tqdm(iterator, *args, **kwargs):
return iterator
We consider $K \geq 1$ arms, which are distributions $\nu_k$. We focus on Bernoulli distributions, which are characterized by their means, $\nu_k = \mathcal{B}(\mu_k)$ for $\mu_k\in[0,1]$. A stationary bandit problem is defined here by the vector $[\mu_1,\dots,\mu_K]$.
For a fixed problem and a horizon $T\in\mathbb{N}$, $T\geq1$, we draw samples from the $K$ distributions to get data: $\forall t, r_k(t) \sim \nu_k$, ie, $\mathbb{P}(r_k(t) = 1) = \mu_k$ and $r_k(t) \in \{0,1\}$.
Here we give some examples of stationary problems and examples of data we can draw from them.
def bernoulli_samples(means, horizon=1000):
if np.size(means) == 1:
return np.random.binomial(1, means, size=horizon)
else:
results = np.zeros((np.size(means), horizon))
for i, mean in enumerate(means):
results[i] = np.random.binomial(1, mean, size=horizon)
return results
problem1 = [0.5]
bernoulli_samples(problem1, horizon=20)
%timeit bernoulli_samples(problem1, horizon=1000)
Now for Gaussian data:
sigma = 0.25  # Bernoulli distributions are 1/4-sub-Gaussian too!
def gaussian_samples(means, horizon=1000, sigma=sigma):
if np.size(means) == 1:
return np.random.normal(loc=means, scale=sigma, size=horizon)
else:
results = np.zeros((np.size(means), horizon))
for i, mean in enumerate(means):
results[i] = np.random.normal(loc=mean, scale=sigma, size=horizon)
return results
gaussian_samples(problem1, horizon=20)
%timeit gaussian_samples(problem1, horizon=1000)
For a bandit problem with $K \geq 2$ arms, the goal is to design an online learning algorithm that, at each time step $t$, chooses an arm $A(t)$ and observes the reward $r(t) = r_{A(t)}(t)$.
An algorithm is efficient if it obtains a high (expected) sum of rewards, ie, $\sum_{t=1}^T r(t)$.
Note that I don't focus on bandit algorithms here.
problem2 = [0.1, 0.5, 0.9]
bernoulli_samples(problem2, horizon=20)
problem2 = [0.1, 0.5, 0.9]
gaussian_samples(problem2, horizon=20)
For instance on these data, the best arm is clearly the third one, with expected reward of $\mu^* = \max_k \mu_k = 0.9$.
Now we fix the horizon $T\in\mathbb{N}$, $T\geq1$, and we also consider a set of $\Upsilon_T$ break points, $\tau_1,\dots,\tau_{\Upsilon_T} \in\{1,\dots,T\}$. We denote $\tau_0 = 0$ and $\tau_{\Upsilon_T+1} = T$ for notational convenience. We can assume that the breakpoints are far "enough" from each other, for instance that there exists an integer $N\in\mathbb{N},N\geq1$ such that $\min_{i=0,\dots,\Upsilon_T} \tau_{i+1} - \tau_i \geq N K$. That is, on each stationary interval, a uniform sampling of the $K$ arms gives at least $N$ samples per arm.
Now, in any stationary interval $[\tau_i + 1, \tau_{i+1}]$, the $K \geq 1$ arms are distributions $\nu_k^{(i)}$. We focus on Bernoulli distributions, which are characterized by their means, $\nu_k^{(i)} := \mathcal{B}(\mu_k^{(i)})$ for $\mu_k^{(i)}\in[0,1]$. A piecewise stationary bandit problem is defined here by the vector $[\mu_k^{(i)}]_{1\leq k \leq K, 1 \leq i \leq \Upsilon_T}$.
For a fixed problem and a horizon $T\in\mathbb{N}$, $T\geq1$, we draw samples from the $K$ distributions to get data: $\forall t, r_k(t) \sim \nu_k^{(i)}$ for $i$ the unique index of stationary interval such that $t\in[\tau_i + 1, \tau_{i+1}]$.
The format used to define a piecewise stationary problem is the following. It is compact but generic!
The first example considers a unique arm, with 2 breakpoints uniformly spaced.
# With 1 arm only!
problem_piecewise_0 = lambda horizon: {
"listOfMeans": [
[0.1], # 0 to 499
[0.5], # 500 to 999
[0.8], # 1000 to 1499
],
"changePoints": [
int(0 * horizon / 1500.0),
int(500 * horizon / 1500.0),
int(1000 * horizon / 1500.0),
],
}
# With 2 arms
problem_piecewise_1 = lambda horizon: {
"listOfMeans": [
[0.1, 0.2], # 0 to 399
[0.1, 0.3], # 400 to 799
[0.5, 0.3], # 800 to 1199
[0.4, 0.3], # 1200 to 1599
[0.3, 0.9], # 1600 to end
],
"changePoints": [
int(0 * horizon / 2000.0),
int(400 * horizon / 2000.0),
int(800 * horizon / 2000.0),
int(1200 * horizon / 2000.0),
int(1600 * horizon / 2000.0),
],
}
# With 3 arms
problem_piecewise_2 = lambda horizon: {
"listOfMeans": [
[0.2, 0.5, 0.9], # 0 to 399
[0.2, 0.2, 0.9], # 400 to 799
[0.2, 0.2, 0.1], # 800 to 1199
[0.7, 0.2, 0.1], # 1200 to 1599
[0.7, 0.5, 0.1], # 1600 to end
],
"changePoints": [
int(0 * horizon / 2000.0),
int(400 * horizon / 2000.0),
int(800 * horizon / 2000.0),
int(1200 * horizon / 2000.0),
int(1600 * horizon / 2000.0),
],
}
# With 3 arms
problem_piecewise_3 = lambda horizon: {
"listOfMeans": [
[0.4, 0.5, 0.9], # 0 to 399
[0.5, 0.4, 0.7], # 400 to 799
[0.6, 0.3, 0.5], # 800 to 1199
[0.7, 0.2, 0.3], # 1200 to 1599
[0.8, 0.1, 0.1], # 1600 to end
],
"changePoints": [
int(0 * horizon / 2000.0),
int(400 * horizon / 2000.0),
int(800 * horizon / 2000.0),
int(1200 * horizon / 2000.0),
int(1600 * horizon / 2000.0),
],
}
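As a quick illustration of the spacing assumption above (a purely illustrative check): with the $K=2$ arms of problem_piecewise_1, the gaps between consecutive change points allow any $N \leq 200$.
pb = problem_piecewise_1(2000)
print(np.diff(pb["changePoints"]))  # [400 400 400 400]: each stationary interval has 400 steps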
Now we can write a utility function that transforms this compact representation into a full list of means.
def getFullHistoryOfMeans(problem, horizon=2000):
"""Return the vector of mean of the arms, for a piece-wise stationary MAB.
- It is a numpy array of shape (nbArms, horizon).
"""
pb = problem(horizon)
listOfMeans, changePoints = pb['listOfMeans'], pb['changePoints']
nbArms = len(listOfMeans[0])
if horizon is None:
horizon = np.max(changePoints)
meansOfArms = np.ones((nbArms, horizon))
for armId in range(nbArms):
nbChangePoint = 0
for t in range(horizon):
if nbChangePoint < len(changePoints) - 1 and t >= changePoints[nbChangePoint + 1]:
nbChangePoint += 1
meansOfArms[armId][t] = listOfMeans[nbChangePoint][armId]
return meansOfArms
For example:
getFullHistoryOfMeans(problem_piecewise_0, horizon=50)
getFullHistoryOfMeans(problem_piecewise_1, horizon=50)
getFullHistoryOfMeans(problem_piecewise_2, horizon=50)
getFullHistoryOfMeans(problem_piecewise_3, horizon=50)
And now we need to be able to generate samples from such distributions.
def piecewise_bernoulli_samples(problem, horizon=1000):
fullMeans = getFullHistoryOfMeans(problem, horizon=horizon)
nbArms, horizon = np.shape(fullMeans)
results = np.zeros((nbArms, horizon))
for i in range(nbArms):
mean_i = fullMeans[i, :]
for t in range(horizon):
mean_i_t = max(0, min(1, mean_i[t])) # crop to [0, 1] !
results[i, t] = np.random.binomial(1, mean_i_t)
return results
def piecewise_gaussian_samples(problem, horizon=1000, sigma=sigma):
fullMeans = getFullHistoryOfMeans(problem, horizon=horizon)
nbArms, horizon = np.shape(fullMeans)
results = np.zeros((nbArms, horizon))
for i in range(nbArms):
mean_i = fullMeans[i, :]
for t in range(horizon):
mean_i_t = mean_i[t]
results[i, t] = np.random.normal(loc=mean_i_t, scale=sigma, size=1)
return results
Examples:
getFullHistoryOfMeans(problem_piecewise_0, horizon=100)
piecewise_bernoulli_samples(problem_piecewise_0, horizon=100)
piecewise_gaussian_samples(problem_piecewise_0, horizon=100)
We easily spot the (approximate) location of the breakpoint!
Another example:
piecewise_bernoulli_samples(problem_piecewise_1, horizon=100)
piecewise_gaussian_samples(problem_piecewise_1, horizon=20)
I will implement here the following statistical tests. For each of them, I give a link to the implementation of the corresponding bandit policy in my framework SMPyBandits: M-UCB, CUSUM-UCB, PHT-UCB, GaussianGLR-UCB, BernoulliGLR-UCB.
class ChangePointDetector(object):
def __init__(self, **kwargs):
self._kwargs = kwargs
for key, value in kwargs.items():
setattr(self, key, value)
def __str__(self):
return f"{self.__class__.__name__}{f'({repr(self._kwargs)})' if self._kwargs else ''}"
def detect(self, all_data, t):
raise NotImplementedError
Using classes here simply makes it possible to pretty-print the algorithms when they have parameters:
print(ChangePointDetector())
print(ChangePointDetector(w=10, b=1))
Monitored
It uses a McDiarmid inequality. For an even window size $w\in\mathbb{N}$ and a threshold $b\in\mathbb{R}^+$: at time $t$, if there are at least $w$ data points in the data vector $(X_i)_i$, then let $Y$ denote the last $w$ data points. A change is detected if $$ \left| \sum_{i=w/2+1}^{w} Y_i - \sum_{i=1}^{w/2} Y_i \right| > b ? $$
NB_ARMS = 1
WINDOW_SIZE = 80
import numba
class Monitored(ChangePointDetector):
def __init__(self, window_size=WINDOW_SIZE, threshold_b=None):
super().__init__(window_size=window_size, threshold_b=threshold_b)
def __str__(self):
if self.threshold_b:
return f"Monitored($w={self.window_size:.3g}$, $b={self.threshold_b:.3g}$)"
else:
latexname = r"\sqrt{\frac{w}{2} \log(2 T^2)}"
return f"Monitored($w={self.window_size:.3g}$, $b={latexname}$)"
def detect(self, all_data, t):
r""" A change is detected for the current arm if the following test is true:
.. math:: |\sum_{i=w/2+1}^{w} Y_i - \sum_{i=1}^{w/2} Y_i | > b ?
        - where :math:`Y_i` is the i-th data point among the latest w data points from this arm (ie, :math:`X_k(t)` for :math:`t = n_k - w + 1` to :math:`t = n_k`, where :math:`n_k` is the current number of samples from arm k).
- where :attr:`threshold_b` is the threshold b of the test, and :attr:`window_size` is the window-size w.
"""
data = all_data[:t]
# don't try to detect change if there is not enough data!
if len(data) < self.window_size:
return False
# compute parameters
horizon = len(all_data)
threshold_b = self.threshold_b
if threshold_b is None:
threshold_b = np.sqrt(self.window_size/2 * np.log(2 * NB_ARMS * horizon**2))
last_w_data = data[-self.window_size:]
sum_first_half = np.sum(last_w_data[:self.window_size//2])
sum_second_half = np.sum(last_w_data[self.window_size//2:])
return abs(sum_first_half - sum_second_half) > threshold_b
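For instance, a quick check on toy Bernoulli data with a change at $t=500$ (a sketch; the outcomes are random but hold with high probability):
data = np.concatenate((np.random.binomial(1, 0.1, 500), np.random.binomial(1, 0.9, 500)))
detector = Monitored(window_size=80)
print(detector.detect(data, 400))  # False (w.h.p.): the window [320, 400) sees only stationary data
print(detector.detect(data, 540))  # True (w.h.p.): the window [460, 540) straddles the breakpoint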
CUSUM
The two-sided CUSUM algorithm, from [Page, 1954], works like this:
$$ s_k^- = (y_k - \hat{u}_0 - \varepsilon) 1(k > M),\\ s_k^+ = (\hat{u}_0 - y_k - \varepsilon) 1(k > M),\\ g_k^+ = \max(0, g_{k-1}^+ + s_k^+),\\ g_k^- = \max(0, g_{k-1}^- + s_k^-). $$
The change is detected if $\max(g_k^+, g_k^-) > h$, where threshold_h is the threshold of the test, and $M$ is the minimum number of observations between change points.
#: Precision of the test.
EPSILON = 0.5
#: Default value of :math:`\lambda`.
LAMBDA = 1
#: Hypothesis on the speed of changes: between two change points, there are at least :math:`M * K` time steps, where K is the number of arms, and M is this constant.
MIN_NUMBER_OF_OBSERVATION_BETWEEN_CHANGE_POINT = 50
MAX_NB_RANDOM_EVENTS = 1
from scipy.special import comb
def compute_h__CUSUM(horizon,
verbose=False,
M=MIN_NUMBER_OF_OBSERVATION_BETWEEN_CHANGE_POINT,
max_nb_random_events=MAX_NB_RANDOM_EVENTS,
nbArms=1,
epsilon=EPSILON,
lmbda=LAMBDA,
):
r""" Compute the values :math:`C_1^+, C_1^-, C_1, C_2, h` from the formulas in Theorem 2 and Corollary 2 in the paper."""
T = int(max(1, horizon))
UpsilonT = int(max(1, max_nb_random_events))
K = int(max(1, nbArms))
C1_minus = np.log(((4 * epsilon) / (1-epsilon)**2) * comb(M, int(np.floor(2 * epsilon * M))) * (2 * epsilon)**M + 1)
C1_plus = np.log(((4 * epsilon) / (1+epsilon)**2) * comb(M, int(np.ceil(2 * epsilon * M))) * (2 * epsilon)**M + 1)
C1 = min(C1_minus, C1_plus)
if C1 == 0: C1 = 1 # FIXME
h = 1/C1 * np.log(T / UpsilonT)
return h
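For instance, with the default parameters ($M=50$, $\varepsilon=0.5$, $\Upsilon_T=1$), the threshold $h$ grows logarithmically with the horizon $T$ (illustrative values only):
for T in (100, 1000, 10000):
    print(f"T = {T:6d} -> h = {compute_h__CUSUM(T):.4g}")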
class CUSUM(ChangePointDetector):
def __init__(self,
epsilon=EPSILON,
M=MIN_NUMBER_OF_OBSERVATION_BETWEEN_CHANGE_POINT,
threshold_h=None,
):
assert 0 < epsilon < 1, f"Error: epsilon for CUSUM must be in (0, 1) but is {epsilon}."
super().__init__(epsilon=epsilon, M=M, threshold_h=threshold_h)
def __str__(self):
if self.threshold_h:
return fr"CUSUM($\varepsilon={self.epsilon:.3g}$, $M={self.M}$, $h={self.threshold_h:.3g}$)"
else:
return fr"CUSUM($\varepsilon={self.epsilon:.3g}$, $M={self.M}$, $h=$'auto')"
def detect(self, all_data, t):
r""" Detect a change in the current arm, using the two-sided CUSUM algorithm [Page, 1954].
- For each *data* k, compute:
.. math::
s_k^- &= (y_k - \hat{u}_0 - \varepsilon) 1(k > M),\\
s_k^+ &= (\hat{u}_0 - y_k - \varepsilon) 1(k > M),\\
g_k^+ &= \max(0, g_{k-1}^+ + s_k^+),\\
g_k^- &= \max(0, g_{k-1}^- + s_k^-).
- The change is detected if :math:`\max(g_k^+, g_k^-) > h`, where :attr:`threshold_h` is the threshold of the test,
- And :math:`\hat{u}_0 = \frac{1}{M} \sum_{k=1}^{M} y_k` is the mean of the first M samples, where M is :attr:`M` the min number of observation between change points.
"""
data = all_data[:t]
# compute parameters
horizon = len(all_data)
threshold_h = self.threshold_h
if self.threshold_h is None:
            threshold_h = compute_h__CUSUM(horizon, M=self.M, max_nb_random_events=1, epsilon=self.epsilon)
gp, gm = 0, 0
# First we use the first M samples to calculate the average :math:`\hat{u_0}`.
u0hat = np.mean(data[:self.M])
for k in range(self.M + 1, len(data)):
y_k = data[k]
sp = u0hat - y_k - self.epsilon # no need to multiply by (k > self.M)
sm = y_k - u0hat - self.epsilon # no need to multiply by (k > self.M)
gp = max(0, gp + sp)
gm = max(0, gm + sm)
if max(gp, gm) >= threshold_h:
return True
return False
PHT
The two-sided PHT algorithm (Page-Hinkley Test), from [Hinkley, 1971], works like this:
$$ s_k^- = y_k - \hat{y}_k - \varepsilon,\\ s_k^+ = \hat{y}_k - y_k - \varepsilon,\\ g_k^+ = \max(0, g_{k-1}^+ + s_k^+),\\ g_k^- = \max(0, g_{k-1}^- + s_k^-). $$
The change is detected if $\max(g_k^+, g_k^-) > h$, where threshold_h is the threshold of the test.
class PHT(ChangePointDetector):
def __init__(self,
epsilon=EPSILON,
M=MIN_NUMBER_OF_OBSERVATION_BETWEEN_CHANGE_POINT,
threshold_h=None,
):
assert 0 < epsilon < 1, f"Error: epsilon for CUSUM must be in (0, 1) but is {epsilon}."
super().__init__(epsilon=epsilon, M=M, threshold_h=threshold_h)
def __str__(self):
if self.threshold_h:
return fr"PHT($\varepsilon={self.epsilon:.3g}$, $M={self.M}$, $h={self.threshold_h:.3g}$)"
else:
return fr"PHT($\varepsilon={self.epsilon:.3g}$, $M={self.M}$, $h=$'auto')"
def detect(self, all_data, t):
r""" Detect a change in the current arm, using the two-sided PHT algorithm [Hinkley, 1971].
- For each *data* k, compute:
.. math::
s_k^- &= y_k - \hat{y}_k - \varepsilon,\\
s_k^+ &= \hat{y}_k - y_k - \varepsilon,\\
g_k^+ &= \max(0, g_{k-1}^+ + s_k^+),\\
g_k^- &= \max(0, g_{k-1}^- + s_k^-).
- The change is detected if :math:`\max(g_k^+, g_k^-) > h`, where :attr:`threshold_h` is the threshold of the test,
- And :math:`\hat{y}_k = \frac{1}{k} \sum_{s=1}^{k} y_s` is the mean of the first k samples.
"""
data = all_data[:t]
# compute parameters
horizon = len(all_data)
threshold_h = self.threshold_h
if threshold_h is None:
            threshold_h = compute_h__CUSUM(horizon, M=self.M, max_nb_random_events=1, epsilon=self.epsilon)
gp, gm = 0, 0
y_k_hat = 0
        # The running mean :math:`\hat{y}_k` of the first k samples is updated incrementally inside the loop.
for k, y_k in enumerate(data):
y_k_hat = (k * y_k_hat + y_k) / (k + 1) # XXX smart formula to update the mean!
sp = y_k_hat - y_k - self.epsilon
sm = y_k - y_k_hat - self.epsilon
gp = max(0, gp + sp)
gm = max(0, gm + sm)
if max(gp, gm) >= threshold_h:
return True
return False
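The same kind of quick check works for CUSUM and PHT (a sketch; with such a large gap, both should fire shortly after the breakpoint at $t=500$, but the exact times are random):
data = np.concatenate((np.random.binomial(1, 0.1, 500), np.random.binomial(1, 0.9, 500)))
for detector in (CUSUM(), PHT()):
    first_detection = next((t for t in range(1, len(data) + 1)
                            if detector.detect(data, t)), None)
    print(f"{detector}: first detection at t = {first_detection}")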
Gaussian GLR
The Generalized Likelihood Ratio test (GLR) works with a one-dimensional exponential family, for which we have a function kl such that if $\mu_1,\mu_2$ are the means of two distributions $\nu_1,\nu_2$, then $\mathrm{KL}(\mathcal{D}(\nu_1), \mathcal{D}(\nu_2)) =$ kl$(\mu_1,\mu_2)$.
For each time step $s$ between $t_0=0$ and $t$, compute: $$G^{\mathcal{N}_1}_{t_0:s:t} = (s-t_0+1) \mathrm{kl}(\mu_{t_0,s}, \mu_{t_0,t}) + (t-s) \mathrm{kl}(\mu_{s+1,t}, \mu_{t_0,t}).$$
The change is detected if there is a time $s$ such that $G^{\mathcal{N}_1}_{t_0:s:t} > b(t_0, s, t, \delta)$, where $b(t_0, s, t, \delta)=$ threshold_h is the threshold of the test.
The threshold is computed as: $$ b(t_0, s, t, \delta):= \left(1 + \frac{1}{t - t_0 + 1}\right) 2 \log\left(\frac{2 (t - t_0) \sqrt{(t - t_0) + 2}}{\delta}\right).$$
Another threshold we want to check is the following: $$ b(t_0, s, t, \delta):= \log\left(\frac{(s - t_0 + 1) (t - s)}{\delta}\right).$$
from math import log, isinf
def compute_c__GLR_0(t0, s, t, horizon=None, delta=None):
r""" Compute the values :math:`c` from the corollary of of Theorem 2 from ["Sequential change-point detection: Laplace concentration of scan statistics and non-asymptotic delay bounds", O.-A. Maillard, 2018].
- The threshold is computed as:
.. math:: h := \left(1 + \frac{1}{t - t_0 + 1}\right) 2 \log\left(\frac{2 (t - t_0) \sqrt{(t - t_0) + 2}}{\delta}\right).
"""
if delta is None:
T = int(max(1, horizon))
delta = 1.0 / T
t_m_t0 = abs(t - t0)
c = (1 + (1 / (t_m_t0 + 1.0))) * 2 * log((2 * t_m_t0 * np.sqrt(t_m_t0 + 2)) / delta)
if c < 0 and isinf(c): c = float('+inf')
return c
from math import log, isinf
def compute_c__GLR(t0, s, t, horizon=None, delta=None):
r""" Compute the values :math:`c` from the corollary of of Theorem 2 from ["Sequential change-point detection: Laplace concentration of scan statistics and non-asymptotic delay bounds", O.-A. Maillard, 2018].
- The threshold is computed as:
.. math:: h := \log\left(\frac{(s - t_0 + 1) (t - s)}{\delta}\right).
"""
if delta is None:
T = int(max(1, horizon))
delta = 1.0 / T
arg = (s - t0 + 1) * (t - s) / delta
if arg <= 0: c = float('+inf')
else: c = log(arg)
return c
For Gaussian distributions of known variance, the Kullback-Leibler divergence is easy to compute:
Kullback-Leibler divergence for Gaussian distributions of means $x$ and $y$ and variances $\sigma^2_x = \sigma^2_y$, $\nu_1 = \mathcal{N}(x, \sigma_x^2)$ and $\nu_2 = \mathcal{N}(y, \sigma_x^2)$ is:
$$\mathrm{KL}(\nu_1, \nu_2) = \frac{(x - y)^2}{2 \sigma_y^2} + \frac{1}{2}\left( \frac{\sigma_x^2}{\sigma_y^2} - 1 - \log\left(\frac{\sigma_x^2}{\sigma_y^2}\right) \right).$$
def klGauss(x, y, sig2x=0.25):
r""" Kullback-Leibler divergence for Gaussian distributions of means ``x`` and ``y`` and variances ``sig2x`` and ``sig2y``, :math:`\nu_1 = \mathcal{N}(x, \sigma_x^2)` and :math:`\nu_2 = \mathcal{N}(y, \sigma_x^2)`:
    .. math:: \mathrm{KL}(\nu_1, \nu_2) = \frac{(x - y)^2}{2 \sigma_y^2} + \frac{1}{2}\left( \frac{\sigma_x^2}{\sigma_y^2} - 1 - \log\left(\frac{\sigma_x^2}{\sigma_y^2}\right) \right).
See https://en.wikipedia.org/wiki/Normal_distribution#Other_properties
- sig2y = sig2x (same variance).
"""
return (x - y) ** 2 / (2. * sig2x)
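Two quick sanity checks: the divergence is zero at equal means, and for the default $\sigma^2_x = 0.25$ we get $\mathrm{kl}(0.1, 0.9) = (0.8)^2 / (2 \times 0.25) = 1.28$.
print(klGauss(0.5, 0.5))  # 0.0
print(klGauss(0.1, 0.9))  # 1.28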
class GaussianGLR(ChangePointDetector):
def __init__(self, mult_threshold_h=1, delta=None):
super().__init__(mult_threshold_h=mult_threshold_h, delta=delta)
def __str__(self):
return r"Gaussian-GLR($h_0={}$, $\delta={}$)".format(
f"{self.mult_threshold_h:.3g}" if self.mult_threshold_h is not None else 'auto',
f"{self.delta:.3g}" if self.delta is not None else 'auto',
)
def detect(self, all_data, t):
r""" Detect a change in the current arm, using the Generalized Likelihood Ratio test (GLR) and the :attr:`kl` function.
- For each *time step* :math:`s` between :math:`t_0=0` and :math:`t`, compute:
.. math::
            G^{\mathcal{N}_1}_{t_0:s:t} = (s-t_0+1) \mathrm{kl}(\mu_{t_0,s}, \mu_{t_0,t}) + (t-s) \mathrm{kl}(\mu_{s+1,t}, \mu_{t_0,t}).
- The change is detected if there is a time :math:`s` such that :math:`G^{\mathcal{N}_1}_{t_0:s:t} > h`, where :attr:`threshold_h` is the threshold of the test,
- And :math:`\mu_{a,b} = \frac{1}{b-a+1} \sum_{s=a}^{b} y_s` is the mean of the samples between :math:`a` and :math:`b`.
"""
data = all_data[:t]
t0 = 0
horizon = len(all_data)
# compute parameters
mean_all = np.mean(data[t0 : t+1])
mean_before = 0
mean_after = mean_all
for s in range(t0, t):
# DONE okay this is efficient we don't compute the same means too many times!
y = data[s]
mean_before = (s * mean_before + y) / (s + 1)
mean_after = ((t + 1 - s + t0) * mean_after - y) / (t - s + t0)
kl_before = klGauss(mean_before, mean_all)
kl_after = klGauss(mean_after, mean_all)
threshold_h = self.mult_threshold_h * compute_c__GLR(t0, s, t, horizon=horizon, delta=self.delta)
glr = (s - t0 + 1) * kl_before + (t - s) * kl_after
if glr >= threshold_h:
return True
return False
Bernoulli GLR
The same GLR algorithm but using the Bernoulli KL, given by:
$$\mathrm{KL}(\mathcal{B}(x), \mathcal{B}(y)) = x \log(\frac{x}{y}) + (1-x) \log(\frac{1-x}{1-y}).$$
import cython
%load_ext cython
def klBern(x: float, y: float) -> float:
r""" Kullback-Leibler divergence for Bernoulli distributions. https://en.wikipedia.org/wiki/Bernoulli_distribution#Kullback.E2.80.93Leibler_divergence
.. math:: \mathrm{KL}(\mathcal{B}(x), \mathcal{B}(y)) = x \log(\frac{x}{y}) + (1-x) \log(\frac{1-x}{1-y})."""
x = min(max(x, 1e-6), 1 - 1e-6)
y = min(max(y, 1e-6), 1 - 1e-6)
return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))
%timeit klBern(np.random.random(), np.random.random())
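The introduction also mentioned numba.jit; here is a minimal sketch of a jitted variant (assuming numba is installed; the first call pays a one-off compilation cost):
from numba import jit

@jit(nopython=True)
def klBern_numba(x, y):
    x = min(max(x, 1e-6), 1 - 1e-6)
    y = min(max(y, 1e-6), 1 - 1e-6)
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

klBern_numba(0.5, 0.5)  # trigger the JIT compilation once
%timeit klBern_numba(np.random.random(), np.random.random())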
%%cython --annotate
from libc.math cimport log
eps = 1e-6 #: Threshold value: everything in [0, 1] is truncated to [eps, 1 - eps]
def klBern_cython(float x, float y) -> float:
r""" Kullback-Leibler divergence for Bernoulli distributions. https://en.wikipedia.org/wiki/Bernoulli_distribution#Kullback.E2.80.93Leibler_divergence
.. math:: \mathrm{KL}(\mathcal{B}(x), \mathcal{B}(y)) = x \log(\frac{x}{y}) + (1-x) \log(\frac{1-x}{1-y})."""
x = min(max(x, 1e-6), 1 - 1e-6)
y = min(max(y, 1e-6), 1 - 1e-6)
return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))
%timeit klBern_cython(np.random.random(), np.random.random())
Now the class, using this kl function (one could plug in the optimized Cython or numba version instead).
class BernoulliGLR(ChangePointDetector):
def __init__(self, mult_threshold_h=1, delta=None):
super().__init__(mult_threshold_h=mult_threshold_h, delta=delta)
def __str__(self):
return r"Bernoulli-GLR($h_0={}$, $\delta={}$)".format(
f"{self.mult_threshold_h:.3g}" if self.mult_threshold_h is not None else 'auto',
f"{self.delta:.3g}" if self.delta is not None else 'auto',
)
def detect(self, all_data, t):
r""" Detect a change in the current arm, using the Generalized Likelihood Ratio test (GLR) and the :attr:`kl` function.
- For each *time step* :math:`s` between :math:`t_0=0` and :math:`t`, compute:
.. math::
            G^{\mathcal{N}_1}_{t_0:s:t} = (s-t_0+1) \mathrm{kl}(\mu_{t_0,s}, \mu_{t_0,t}) + (t-s) \mathrm{kl}(\mu_{s+1,t}, \mu_{t_0,t}).
- The change is detected if there is a time :math:`s` such that :math:`G^{\mathcal{N}_1}_{t_0:s:t} > h`, where :attr:`threshold_h` is the threshold of the test,
- And :math:`\mu_{a,b} = \frac{1}{b-a+1} \sum_{s=a}^{b} y_s` is the mean of the samples between :math:`a` and :math:`b`.
"""
data = all_data[:t]
t0 = 0
horizon = len(all_data)
# compute parameters
mean_all = np.mean(data[t0 : t+1])
mean_before = 0
mean_after = mean_all
for s in range(t0, t):
# DONE okay this is efficient we don't compute the same means too many times!
y = data[s]
mean_before = (s * mean_before + y) / (s + 1)
mean_after = ((t + 1 - s + t0) * mean_after - y) / (t - s + t0)
kl_before = klBern(mean_before, mean_all)
kl_after = klBern(mean_after, mean_all)
threshold_h = self.mult_threshold_h * compute_c__GLR(t0, s, t, horizon=horizon, delta=self.delta)
glr = (s - t0 + 1) * kl_before + (t - s) * kl_after
if glr >= threshold_h:
return True
return False
Sub-Gaussian GLR
A slightly different GLR algorithm for non-parametric sub-Gaussian distributions. We assume the distributions $\nu^1$ and $\nu^2$ to be $\sigma^2$-sub Gaussian, for a known value of $\sigma\in\mathbb{R}^+$, and we consider a confidence level $\delta\in(0,1)$ (typically, it is set to $\frac{1}{T}$ if the horizon $T$ is known, or $\delta=\delta_t=\frac{1}{t^2}$ to have $\sum_{t=1}^{T} \delta_t < +\infty$).
Then we consider the following test: the non-parametric sub-Gaussian Generalized Likelihood Ratio test (GLR) works like this:
For each time step $s$ between $t_0=0$ and $t$, compute: $$G^{\text{sub-}\sigma}_{t_0:s:t} = |\mu_{t_0,s} - \mu_{s+1,t}|.$$
The change is detected if there is a time $s$ such that $G^{\text{sub-}\sigma}_{t_0:s:t} > b_{t_0}(s,t,\delta)$, where $b_{t_0}(s,t,\delta)$ is the threshold of the test,
The threshold is computed as either the "joint" variant: $$b^{\text{joint}}_{t_0}(s,t,\delta) := \sigma \sqrt{ \left(\frac{1}{s-t_0+1} + \frac{1}{t-s}\right) \left(1 + \frac{1}{t-t_0+1}\right) 2 \log\left( \frac{2(t-t_0)\sqrt{t-t_0+2}}{\delta} \right)}.$$ or the "disjoint" variant:
$$b^{\text{disjoint}}_{t_0}(s,t,\delta) := \sqrt{2} \sigma \left( \sqrt{ \frac{1 + \frac{1}{s - t_0 + 1}}{s - t_0 + 1} \log\left( \frac{4 \sqrt{s - t_0 + 2}}{\delta}\right) } + \sqrt{ \frac{1 + \frac{1}{t - s + 1}}{t - s + 1} \log\left( \frac{4 (t - t_0) \sqrt{t - s + 1}}{\delta}\right) } \right).$$
# Default confidence level?
DELTA = 0.01
# By default, assume distributions are 0.25-sub Gaussian, like Bernoulli
# or any distributions with support on [0,1]
SIGMA = 0.25
# Whether to use the joint or disjoint threshold function
JOINT = True
from math import log, sqrt
def threshold_SubGaussianGLR_joint(t0, s, t, delta=DELTA, sigma=SIGMA):
return sigma * sqrt(
(1.0 / (s - t0 + 1) + 1.0/(t - s)) * (1.0 + 1.0/(t - t0+1))
* 2 * max(0, log(( 2 * (t - t0) * sqrt(t - t0 + 2)) / delta ))
)
from math import log, sqrt
def threshold_SubGaussianGLR_disjoint(t0, s, t, delta=DELTA, sigma=SIGMA):
return np.sqrt(2) * sigma * (sqrt(
((1.0 + (1.0 / (s - t0 + 1))) / (s - t0 + 1)) * max(0, log( (4 * sqrt(s - t0 + 2)) / delta ))
) + sqrt(
((1.0 + (1.0 / (t - s + 1))) / (t - s + 1)) * max(0, log( (4 * (t - t0) * sqrt(t - s + 1)) / delta ))
))
def threshold_SubGaussianGLR(t0, s, t, delta=DELTA, sigma=SIGMA, joint=JOINT):
if joint:
return threshold_SubGaussianGLR_joint(t0, s, t, delta, sigma=sigma)
else:
return threshold_SubGaussianGLR_disjoint(t0, s, t, delta, sigma=sigma)
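For instance, we can compare the two variants for a few positions $s$ (purely illustrative values):
t0, t = 0, 1000
for s in (100, 500, 900):
    b_joint = threshold_SubGaussianGLR(t0, s, t, joint=True)
    b_disjoint = threshold_SubGaussianGLR(t0, s, t, joint=False)
    print(f"s = {s:4d}: joint = {b_joint:.4g}, disjoint = {b_disjoint:.4g}")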
And now we can write the CD algorithm:
class SubGaussianGLR(ChangePointDetector):
def __init__(self, delta=DELTA, sigma=SIGMA, joint=JOINT):
super().__init__(delta=delta, sigma=sigma, joint=joint)
def __str__(self):
return fr"SubGaussian-GLR($\delta=${self.delta:.3g}, $\sigma=${self.sigma:.3g}, {'joint' if self.joint else 'disjoint'})"
def detect(self, all_data, t):
r""" Detect a change in the current arm, using the non-parametric sub-Gaussian Generalized Likelihood Ratio test (GLR) works like this:
- For each *time step* :math:`s` between :math:`t_0=0` and :math:`t`, compute:
.. math:: G^{\text{sub-}\sigma}_{t_0:s:t} = |\mu_{t_0,s} - \mu_{s+1,t}|.
- The change is detected if there is a time :math:`s` such that :math:`G^{\text{sub-}\sigma}_{t_0:s:t} > b_{t_0}(s,t,\delta)`, where :math:`b_{t_0}(s,t,\delta)` is the threshold of the test,
The threshold is computed as:
.. math:: b_{t_0}(s,t,\delta) := \sigma \sqrt{ \left(\frac{1}{s-t_0+1} + \frac{1}{t-s}\right) \left(1 + \frac{1}{t-t_0+1}\right) 2 \log\left( \frac{2(t-t_0)\sqrt{t-t_0+2}}{\delta} \right)}.
- And :math:`\mu_{a,b} = \frac{1}{b-a+1} \sum_{s=a}^{b} y_s` is the mean of the samples between :math:`a` and :math:`b`.
"""
data = all_data[:t]
t0 = 0
horizon = len(all_data)
delta = self.delta
if delta is None:
delta = 1.0 / max(1, horizon)
mean_before = 0
mean_after = np.mean(data[t0 : t+1])
for s in range(t0, t):
# DONE okay this is efficient we don't compute the same means too many times!
y = data[s]
mean_before = (s * mean_before + y) / (s + 1)
mean_after = ((t + 1 - s + t0) * mean_after - y) / (t - s + t0)
# compute threshold
threshold = threshold_SubGaussianGLR(t0, s, t, delta=delta, sigma=self.sigma, joint=self.joint)
glr = abs(mean_before - mean_after)
if glr >= threshold:
# print(f"DEBUG: t0 = {t0}, t = {t}, s = {s}, horizon = {horizon}, delta = {delta}, threshold = {threshold} and mu(s+1, t) = {mu(s+1, t)}, and mu(t0, s) = {mu(t0, s)}, and and glr = {glr}.")
return True
return False
all_CD_algorithms = [
Monitored, CUSUM, PHT,
GaussianGLR, BernoulliGLR, SubGaussianGLR
]
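With their default parameters, the detectors pretty-print as follows:
for CDAlgorithm in all_CD_algorithms:
    print(CDAlgorithm())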
I now want to compare, on a simple non-stationary problem, the efficiency of the different change detection algorithms, in terms of computational cost. But most importantly, I want to compare them in terms of detection delay, false alarm probability, and missed detection probability.
def str_of_CDAlgorithm(CDAlgorithm, *args, **kwargs):
detector = CDAlgorithm(*args, **kwargs)
return str(detector)
# With 1 arm only! With 1 change only!
toy_problem_piecewise = lambda firstMean, secondMean, tau: lambda horizon: {
"listOfMeans": [
[firstMean], # 0 to 499
[secondMean], # 500 to 999
],
"changePoints": [
0,
tau
],
}
def get_toy_data(firstMean=0.5, secondMean=0.9, tau=None, horizon=100, gaussian=False):
if tau is None:
tau = horizon // 2
elif isinstance(tau, float):
tau = int(tau * horizon)
problem = toy_problem_piecewise(firstMean, secondMean, tau)
if gaussian:
data = piecewise_gaussian_samples(problem, horizon=horizon)
else:
data = piecewise_bernoulli_samples(problem, horizon=horizon)
data = data.reshape(horizon)
return data
It is now very easy to generate data and "see" the location of the breakpoint directly on the data:
get_toy_data(firstMean=0.1, secondMean=0.9, tau=0.5, horizon=100)
get_toy_data(firstMean=0.1, secondMean=0.9, tau=0.2, horizon=100)
get_toy_data(firstMean=0.1, secondMean=0.4, tau=0.5, horizon=100)
And similarly for Gaussian data, we clearly see a difference around the middle of the vector:
get_toy_data(firstMean=0.1, secondMean=0.9, tau=0.5, horizon=20, gaussian=True)
Of course, we want to check that detecting the change becomes harder when the gap $\Delta = |\mu_2 - \mu_1|$ gets smaller, or when the breakpoint $\tau$ gets too close to the start or the end of the horizon.
# Cf. https://stackoverflow.com/a/36313217/
from IPython.display import display, Markdown
def check_onemeasure(measure, name,
firstMean=0.1,
secondMean=0.4,
tau=0.5,
horizon=100,
repetitions=50,
gaussian=False,
unit="",
list_of_args_kwargs=None,
CDAlgorithms=None,
):
if CDAlgorithms is None:
CDAlgorithms = tuple(all_CD_algorithms)
if isinstance(tau, float):
tau = int(tau * horizon)
print(f"\nGenerating toy {'Gaussian' if gaussian else 'Bernoulli'} data for mu^1 = {firstMean}, mu^2 = {secondMean}, tau = {tau} and horizon = {horizon}...")
results = np.zeros((repetitions, len(CDAlgorithms)))
list_of_args = [tuple() for _ in CDAlgorithms]
list_of_kwargs = [dict() for _ in CDAlgorithms]
for rep in tqdm(range(repetitions), desc="Repetitions"):
data = get_toy_data(firstMean=firstMean, secondMean=secondMean, tau=tau, horizon=horizon, gaussian=gaussian)
for i, CDAlgorithm in enumerate(CDAlgorithms):
if list_of_args_kwargs:
list_of_args[i], list_of_kwargs[i] = list_of_args_kwargs[i]
results[rep, i] = measure(data, tau, CDAlgorithm, *list_of_args[i], **list_of_kwargs[i])
# print and display a table of the results
markdown_text = """
| Algorithm | {} |
|------|------|
{}
""".format(name, "\n".join([
"| {} | ${:.3g}${} |".format(
str_of_CDAlgorithm(CDAlgorithm, *list_of_args[i], **list_of_kwargs[i]),
mean_result, unit
)
for CDAlgorithm, mean_result in zip(CDAlgorithms, np.mean(results, axis=0))
]))
print(markdown_text)
display(Markdown(markdown_text))
return results
def eval_CDAlgorithm(CDAlgorithm, data, t, *args, **kwargs):
detector = CDAlgorithm(*args, **kwargs)
return detector.detect(data, t)
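Before measuring anything precisely, here is a quick smoke test (outcomes are random): run each detector once on the same toy data with a large gap, and report the first time it fires (None means no detection).
data = get_toy_data(firstMean=0.1, secondMean=0.9, tau=0.5, horizon=200)
for CDAlgorithm in all_CD_algorithms:
    first_detection = next((t for t in range(1, len(data) + 1)
                            if eval_CDAlgorithm(CDAlgorithm, data, t)), None)
    print(f"{str_of_CDAlgorithm(CDAlgorithm)}: first detection at t = {first_detection}")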
I don't really care about memory efficiency, so I won't check it.
import time
def time_efficiency(data, tau, CDAlgorithm, *args, **kwargs):
startTime = time.time()
horizon = len(data)
for t in range(0, horizon + 1):
_ = eval_CDAlgorithm(CDAlgorithm, data, t, *args, **kwargs)
endTime = time.time()
return endTime - startTime
To benchmark each of the CD algorithms, we can use the line_profiler module and its %lprun magic.
!pip3 install line_profiler >/dev/null
%lprun -f Monitored.detect check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=200, unit=" seconds", CDAlgorithms=[Monitored])
$20\%$ of the time is spent computing the threshold, and $55\%$ is spent computing the two sums: we cannot optimize these!
%lprun -f CUSUM.detect check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=200, unit=" seconds", CDAlgorithms=[CUSUM])
$10\%$ of the time is spent computing the threshold, and about $10\%$ to $25\%$ are spent on the few maths computations that can hardly be optimized.
%lprun -f PHT.detect check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=200, unit=" seconds", CDAlgorithms=[PHT])
$10\%$ of the time is spent computing the threshold, and about $10\%$ to $25\%$ are spent on the few maths computations that can hardly be optimized.
%lprun -f GaussianGLR.detect check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=200, unit=" seconds", CDAlgorithms=[GaussianGLR])
$30\%$ of the time is spent computing the threshold (at every step it must be recomputed!), and about $10\%$ to $25\%$ are spent on the few maths computations that can hardly be optimized.
%lprun -f BernoulliGLR.detect check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=200, unit=" seconds", CDAlgorithms=[BernoulliGLR])
$30\%$ of the time is spent computing the threshold (at every step it must be recomputed!), and about $10\%$ to $25\%$ are spent on the few maths computations that can hardly be optimized.
%lprun -f SubGaussianGLR.detect check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=200, unit=" seconds", CDAlgorithms=[SubGaussianGLR])
$30\%$ of the time is spent computing the threshold (at every step it must be recomputed!), and about $10\%$ to $25\%$ are spent on the few maths computations that can hardly be optimized.
For example:
_ = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=100, unit=" seconds")
_ = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=100, unit=" seconds", gaussian=True)
The `GLR`-based tests are very slow compared to the `Monitored` approach, and slow compared to `CUSUM` or `PHT`!
%%time
_ = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.2, tau=0.5, horizon=100, unit=" seconds")
Let's compare the results for $T=100$, $T=500$, $T=1000$:
%%time
results_T100 = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=100, unit=" seconds")
%%time
results_T500 = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=500, unit=" seconds")
%%time
results_T1000 = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=1000, unit=" seconds")
%%time
results_T2000 = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=2000, repetitions=10, unit=" seconds")
%%time
results_T2500 = check_onemeasure(time_efficiency, "Time", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=2500, repetitions=10, unit=" seconds")
The three `GLR` tests, `CUSUM` and `PHT` are comparable, and the Bernoulli `GLR` is essentially the most efficient among them; `Monitored` is the only one that is way faster.
When going from a horizon of $T=100$ to $T=500$ and $T=1000$, we see that the time complexity of Monitored grows essentially linearly, while the complexity of CUSUM, PHT and all the GLR tests blows up quadratically:
data_X = np.array([100, 500, 1000, 2000, 2500])
data_Y = [
[
np.mean(results_T100, axis=0)[i],
np.mean(results_T500, axis=0)[i],
np.mean(results_T1000, axis=0)[i],
np.mean(results_T2000, axis=0)[i],
np.mean(results_T2500, axis=0)[i],
]
for i in range(len(all_CD_algorithms))
]
import matplotlib.pyplot as plt
fig = plt.figure()
for i, alg in enumerate(all_CD_algorithms):
plt.plot(data_X, data_Y[i], 'o-', label=alg.__name__, lw=3)
plt.legend()
plt.xlabel("Time horizon $T$")
plt.ylabel("Time complexity in seconds")
plt.title("Comparison of time complexity efficiency of different CD algorithms")
plt.show()
We can fit the time complexity $C^{\mathrm{Algorithm}}(T)$ as a function of $T$ in the form $C(T) \simeq a T^b + c$. Using the function scipy.optimize.curve_fit, it is very easy:
from scipy.optimize import curve_fit
def time_complexity_general_shape(T, a, b, c):
return a * T**b + c
for i, alg in enumerate(all_CD_algorithms):
popt, _ = curve_fit(time_complexity_general_shape, data_X, data_Y[i])
a, b, c = popt
print(f"For algorithm {alg.__name__},\n\ta = {a:.3g}, b = {b:.3g}, c = {c:.3g} is the best fit for C(T) = a T^b + c")
We indeed check that (roughly) $C(T) = \mathcal{O}(T^{1.3})$, ie almost linear, for Monitored, and $C(T) = \mathcal{O}(T^2)$ for all the other algorithms!
def detection_delay(data, tau, CDAlgorithm, *args, **kwargs):
horizon = len(data)
if isinstance(tau, float): tau = int(tau * horizon)
for t in range(tau, horizon + 1):
if eval_CDAlgorithm(CDAlgorithm, data, t, *args, **kwargs):
return t - tau
return horizon - tau
Now we can check the detection delay for our different algorithms.
For example:
_ = check_onemeasure(detection_delay, "Mean detection delay", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=100)
A lot of the detection delays are large (ie, the change was detected too late), when there is not enough data! `SubGaussian-GLR` seems to be the only one "fast enough", but it triggers false alarms rather than correct detections!
%%time
_ = check_onemeasure(detection_delay, "Mean detection delay", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=1000)
A very small detection delay, with enough data (a delay of 40 is small when there are $500$ samples from $\nu_1$ and $500$ from $\nu_2$)!
def false_alarm(data, tau, CDAlgorithm, *args, **kwargs):
horizon = len(data)
if isinstance(tau, float): tau = int(tau * horizon)
for t in range(0, tau):
if eval_CDAlgorithm(CDAlgorithm, data, t, *args, **kwargs):
return True
return False
Now we can check the false alarm probabilities for our different algorithms.
For example:
_ = check_onemeasure(false_alarm, "Mean false alarm rate", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=100)
A lot of false alarms for `BernoulliGLR` but not for the others, when there is not enough data!
%%time
_ = check_onemeasure(false_alarm, "Mean false alarm rate", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=1000)
Almost no false alarms with enough data! Only `Sub-Gaussian-GLR` still has a lot of false alarms, even with enough data!
def missed_detection(data, tau, CDAlgorithm, *args, **kwargs):
horizon = len(data)
if isinstance(tau, float): tau = int(tau * horizon)
for t in range(tau, horizon + 1):
if eval_CDAlgorithm(CDAlgorithm, data, t, *args, **kwargs):
return False
return True
Now we can check the missed detection probabilities for our different algorithms.
For example:
_ = check_onemeasure(missed_detection, "Mean missed detection rate", firstMean=0.1, secondMean=0.4, tau=0.5, horizon=100)
A lot of missed detection, with not enough data!
%%time
_ = check_onemeasure(missed_detection, "Mean missed detection rate", firstMean=0.1, secondMean=0.9, tau=0.5, horizon=1000)
No missed detection, with enough data!
Fix an algorithm, e.g., Monitored, then consider one of the quantities defined above (time efficiency, detection delay, false alarm or missed detection probability).
Now, a piecewise stationary problem is characterized by the parameters $\mu_1$, $\Delta = |\mu_2 - \mu_1|$, and $\tau$ and $T$.
Of course, if any of $\tau$ or $\Delta$ are too small, detection is impossible. I want to display a $2$D image view, showing on $x$-axis a grid of values of $\Delta$, on $y$-axis a grid of values of $\tau$, and on the $2$D image, a color-scale to show the detection delay (for instance).
mu_1 = 0.5
max_mu_2 = 1
nb_values_Delta = 20
values_Delta = np.linspace(0, max_mu_2 - mu_1, nb_values_Delta)
horizon = T = 1000
min_tau = 10
max_tau = T - min_tau
step = 50
values_tau = np.arange(min_tau, max_tau + 1, step)
nb_values_tau = len(values_tau)
print(f"This will give a grid of {nb_values_Delta} x {nb_values_tau} = {nb_values_Delta * nb_values_tau} values of Delta and tau to explore.")
And now the function:
def check2D_onemeasure(measure,
CDAlgorithm,
values_Delta,
values_tau,
firstMean=mu_1,
horizon=horizon,
repetitions=10,
verbose=True,
gaussian=False,
n_jobs=1,
*args, **kwargs,
):
print(f"\nExploring {measure.__name__} for algorithm {str_of_CDAlgorithm(CDAlgorithm, *args, **kwargs)} mu^1 = {firstMean} and horizon = {horizon}...")
nb_values_Delta = len(values_Delta)
nb_values_tau = len(values_tau)
print(f"with {nb_values_Delta} values for Delta, and {nb_values_tau} values for tau, and {repetitions} repetitions.")
results = np.zeros((nb_values_Delta, nb_values_tau))
for i, delta in tqdm(enumerate(values_Delta), desc="Delta s", leave=False):
for j, tau in tqdm(enumerate(values_tau), desc="Tau s", leave=False):
secondMean = firstMean + delta
if isinstance(tau, float): tau = int(tau * horizon)
# now the random Monte Carlo repetitions
for rep in tqdm(range(repetitions), desc="Repetitions", leave=False):
data = get_toy_data(firstMean=firstMean, secondMean=secondMean, tau=tau, horizon=horizon, gaussian=gaussian)
result = measure(data, tau, CDAlgorithm, *args, **kwargs)
results[i, j] += result
results[i, j] /= repetitions
if verbose: print(f"For delta = {delta} ({i}th), tau = {tau} ({j}th), mean result = {results[i, j]}")
return results
Using joblib.Parallel for multi-core computations
I want to (try to) use joblib.Parallel to run the "repetitions" for loop in parallel, for instance on 4 cores on my machine.
from joblib import Parallel, delayed
# Try to detect the number of available CPUs
try:
from multiprocessing import cpu_count
CPU_COUNT = cpu_count() #: Number of CPU on the local machine
except ImportError:
CPU_COUNT = 1
print(f"Info: using {CPU_COUNT} jobs in parallel!")
We can rewrite the check2D_onemeasure
function to run some loops in parallel.
def check2D_onemeasure_parallel(measure,
CDAlgorithm,
values_Delta,
values_tau,
firstMean=mu_1,
horizon=horizon,
repetitions=10,
verbose=1,
gaussian=False,
n_jobs=CPU_COUNT,
*args, **kwargs,
):
print(f"\nExploring {measure.__name__} for algorithm {str_of_CDAlgorithm(CDAlgorithm, *args, **kwargs)} mu^1 = {firstMean} and horizon = {horizon}...")
nb_values_Delta = len(values_Delta)
nb_values_tau = len(values_tau)
print(f"with {nb_values_Delta} values for Delta, and {nb_values_tau} values for tau, and {repetitions} repetitions.")
results = np.zeros((nb_values_Delta, nb_values_tau))
def delayed_measure(i, delta, j, tau, rep):
secondMean = firstMean + delta
if isinstance(tau, float): tau = int(tau * horizon)
data = get_toy_data(firstMean=firstMean, secondMean=secondMean, tau=tau, horizon=horizon, gaussian=gaussian)
return i, j, measure(data, tau, CDAlgorithm, *args, **kwargs)
# now the random Monte Carlo repetitions
for i, j, result in Parallel(n_jobs=n_jobs, verbose=int(verbose))(
delayed(delayed_measure)(i, delta, j, tau, rep)
for i, delta in tqdm(enumerate(values_Delta), desc="Delta s ||", leave=False)
for j, tau in tqdm(enumerate(values_tau), desc="Tau s ||", leave=False)
for rep in tqdm(range(repetitions), desc="Repetitions||", leave=False)
):
results[i, j] += result
results /= repetitions
if verbose:
for i, delta in enumerate(values_Delta):
for j, tau in enumerate(values_tau):
print(f"For delta = {delta} ({i}th), tau = {tau} ({j}th), mean result = {results[i, j]}")
return results
Monitored
%%time
_ = check2D_onemeasure(time_efficiency,
Monitored,
values_Delta=[0.05, 0.25, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=100)
%%time
_ = check2D_onemeasure_parallel(time_efficiency,
Monitored,
values_Delta=[0.05, 0.25, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=100,
n_jobs=4)
%%time
_ = check2D_onemeasure(detection_delay,
Monitored,
values_Delta=[0.05, 0.25, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=100)
%%time
_ = check2D_onemeasure_parallel(detection_delay,
Monitored,
values_Delta=[0.05, 0.25, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=100
)
%%time
_ = check2D_onemeasure_parallel(false_alarm,
Monitored,
values_Delta=[0.05, 0.25, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=100,
)
%%time
_ = check2D_onemeasure_parallel(missed_detection,
Monitored,
values_Delta=[0.05, 0.25, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=100,
)
import matplotlib as mpl
FIGSIZE = (19.80, 10.80) #: Figure size, in inches!
mpl.rcParams['figure.figsize'] = FIGSIZE
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
#: List of formats to use for saving the figures, by default.
#: It is a smart idea to save in both raster and vector formats
FORMATS = ('png', 'pdf')
import pickle
from datetime import datetime
from os.path import getsize, getatime
def show_and_save(showplot=True, savefig=None, formats=FORMATS, pickleit=False, fig=None):
""" Maximize the window if need to show it, save it if needed, and then show it or close it.
- Inspired by https://tomspur.blogspot.fr/2015/08/publication-ready-figures-with.html#Save-the-figure
"""
if savefig is not None:
if pickleit and fig is not None:
form = "pickle"
path = "{}.{}".format(savefig, form)
print("Saving raw figure with format {}, to file '{}'...".format(form, path)) # DEBUG
with open(path, "bw") as f:
                pickle.dump(fig, f)
print(" Saved! '{}' created of size '{}b', at '{:%c}' ...".format(path, getsize(path), datetime.fromtimestamp(getatime(path))))
for form in formats:
path = "{}.{}".format(savefig, form)
print("Saving figure with format {}, to file '{}'...".format(form, path)) # DEBUG
plt.savefig(path, bbox_inches=None)
print(" Saved! '{}' created of size '{}b', at '{:%c}' ...".format(path, getsize(path), datetime.fromtimestamp(getatime(path))))
try:
plt.show() if showplot else plt.close()
except (TypeError, AttributeError):
print("Failed to show the figure for some unknown reason...") # DEBUG
Now the function:
def view2D_onemeasure(measure, name,
CDAlgorithm,
values_Delta,
values_tau,
firstMean=mu_1,
horizon=horizon,
repetitions=10,
gaussian=False,
n_jobs=CPU_COUNT,
savefig=None,
*args, **kwargs,
):
check = check2D_onemeasure_parallel if n_jobs > 1 else check2D_onemeasure
results = check(measure, CDAlgorithm,
values_Delta, values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=repetitions,
verbose=False,
gaussian=gaussian,
n_jobs=n_jobs,
*args, **kwargs,
)
fig = plt.figure()
plt.matshow(results)
plt.colorbar(shrink=0.7)
plt.locator_params(axis='x', nbins=1+len(values_tau))
plt.locator_params(axis='y', nbins=len(values_Delta))
ax = plt.gca()
# https://stackoverflow.com/a/19972993/
loc = ticker.MultipleLocator(base=1.0) # this locator puts ticks at regular intervals
ax.xaxis.set_major_locator(loc)
ax.xaxis.set_ticks_position('bottom')
def y_fmt(tick_value, pos): return '{:.3g}'.format(tick_value)
ax.yaxis.set_major_formatter(ticker.FuncFormatter(y_fmt))
ax.yaxis.set_major_locator(loc)
# hack to display the ticks labels as the actual values
if np.max(values_tau) <= 1:
values_tau = np.floor(np.asarray(values_tau) * horizon)
values_tau = list(np.asarray(values_tau, dtype=int))
values_Delta = np.round(values_Delta, 3)
ax.set_xticklabels([0] + list(values_tau)) # hack: the first label is not displayed??
ax.set_yticklabels([0] + list(values_Delta)) # hack: the first label is not displayed??
plt.title(fr"{name} for algorithm {str_of_CDAlgorithm(CDAlgorithm, *args, **kwargs)}, for $T={horizon}$, {'Gaussian' if gaussian else 'Bernoulli'} data and $\mu_1={firstMean:.3g}$ and ${repetitions}$ repetitions")
plt.xlabel(r"Value of $\tau$ time of breakpoint")
plt.ylabel(r"Value of gap $\Delta = |\mu_2 - \mu_1|$")
show_and_save(savefig=savefig)
return fig
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
Monitored,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=50,
savefig="Detection_delay_of_Monitored__Bernoulli_T1000_N50__5deltas__5taus"
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm probability",
Monitored,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=50,
savefig="False_alarm_of_Monitored__Bernoulli_T1000_N50__5deltas__5taus"
)
%%time
_ = view2D_onemeasure(missed_detection, "Missed detection probability",
Monitored,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=50,
savefig="Missed_detection_of_Monitored__Bernoulli_T1000_N50__5deltas__5taus"
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm probability",
Monitored,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=50,
gaussian=True,
savefig="False_alarm_of_Monitored__Gaussian_T1000_N50__5deltas__5taus"
)
%%time
_ = view2D_onemeasure(missed_detection, "Missed detection probability",
Monitored,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=50,
gaussian=True,
savefig="Missed_detection_of_Monitored__Gaussian_T1000_N50__5deltas__5taus"
)
CUSUM
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
CUSUM,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=10,
savefig="Detection_delay_of_CUSUM__Bernoulli_T1000_N10__5deltas__5taus",
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm probability",
CUSUM,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=10,
savefig="False_alarm_of_CUSUM__Bernoulli_T1000_N10__5deltas__5taus",
)
%%time
_ = view2D_onemeasure(missed_detection, "Missed detection probability",
CUSUM,
values_Delta=[0.05, 0.1, 0.25, 0.4, 0.5],
values_tau=[1/10, 1/4, 2/4, 3/4, 9/10],
firstMean=0.5,
horizon=1000,
repetitions=10,
savefig="Missed_detection_of_CUSUM__Bernoulli_T1000_N10__5deltas__5taus",
)
firstMean = mu_1 = 0.5
max_mu_2 = 1
nb_values_Delta = 20
min_delta = 0.15
max_delta = max_mu_2 - mu_1
epsilon = 0.03
values_Delta = np.linspace(min_delta, (1 - epsilon) * max_delta, nb_values_Delta)
print(f"Values of delta: {values_Delta}")
horizon = T = 1000
min_tau = 50
max_tau = T - min_tau
step = 50
values_tau = np.arange(min_tau, max_tau + 1, step)
nb_values_tau = len(values_tau)
print(f"Values of tau: {values_tau}")
print(f"This will give a grid of {nb_values_Delta} x {nb_values_tau} = {nb_values_Delta * nb_values_tau} values of Delta and tau to explore.")
Monitored
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
Monitored,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=50,
savefig="Detection_delay_of_CUSUM__Bernoulli_T1000_N50__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm probability",
Monitored,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=50,
savefig="False_alarm_of_CUSUM__Bernoulli_T1000_N50__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(missed_detection, "Missed detection probability",
Monitored,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=50,
savefig="Missed_detection_of_CUSUM__Bernoulli_T1000_N50__20deltas__19taus",
)
Monitored for Gaussian data
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
Monitored,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=50,
gaussian=True,
savefig="Detection_delay_of_Monitored__Gaussian_T1000_N50__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(missed_detection, "Missed detection probability",
Monitored,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=50,
gaussian=True,
savefig="Missed_detection_of_Monitored__Gaussian_T1000_N50__20deltas__19taus",
)
CUSUM
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
CUSUM,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
savefig="Detection_delay_of_CUSUM__Bernoulli_T1000_N10__20deltas__19taus",
)
PHT
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
PHT,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=5,
savefig="Detection_delay_of_PHT__Bernoulli_T1000_N5__20deltas__19taus",
)
Bernoulli GLR
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
BernoulliGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
savefig="Detection_delay_of_BernoulliGLR__Bernoulli_T1000_N10__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
BernoulliGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
gaussian=True,
savefig="Detection_delay_of_BernoulliGLR__Gaussian_T1000_N10__20deltas__19taus",
)
Gaussian GLR
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
GaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
savefig="Detection_delay_of_GaussianGLR__Bernoulli_T1000_N10__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
GaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=1,
gaussian=True,
savefig="Detection_delay_of_GaussianGLR__Gaussian_T1000_N1__20deltas__19taus",
)
Sub-Gaussian GLR
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
savefig="Detection_delay_of_SubGaussianGLR__Bernoulli_T1000_N10__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
savefig="False_alarm__SubGaussianGLR__Bernoulli_T1000_N10__20deltas__19taus",
)
With this tuning ($\delta=0.01$), the Sub-Gaussian GLR
almost always gives a false alarm! It detects too soon!
For Gaussian data:
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
gaussian=True,
savefig="Detection_delay__SubGaussianGLR__Gaussian_T1000_N10__20deltas__19taus",
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=1,
gaussian=True,
savefig="False_alarm__SubGaussianGLR__Gaussian_T1000_N1__20deltas__19taus",
)
For another, simpler problem.
horizon = T = 200
min_tau = 10
max_tau = T - min_tau
step = 10
values_tau = np.arange(min_tau, max_tau + 1, step)
nb_values_tau = len(values_tau)
print(f"Values of tau: {values_tau}")
firstMean = mu_1 = -1.0
max_mu_2 = 1.0
nb_values_Delta = nb_values_tau
max_delta = max_mu_2 - mu_1
epsilon = 0.01
values_Delta = np.linspace(epsilon * max_delta, (1 - epsilon) * max_delta, nb_values_Delta)
print(f"Values of delta: {values_Delta}")
print(f"This will give a grid of {nb_values_Delta} x {nb_values_tau} = {nb_values_Delta * nb_values_tau} values of Delta and tau to explore.")
%%time
_ = view2D_onemeasure(detection_delay, "Detection delay",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
gaussian=True,
savefig="Detection_delay__SubGaussianGLR__Gaussian_T1000_N10__19deltas__19taus",
)
%%time
_ = view2D_onemeasure(false_alarm, "False alarm probability",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
gaussian=True,
savefig="False_alarm__SubGaussianGLR__Gaussian_T1000_N10__19deltas__19taus",
)
%%time
_ = view2D_onemeasure(missed_detection, "Missed detection probability",
SubGaussianGLR,
values_Delta=values_Delta,
values_tau=values_tau,
firstMean=firstMean,
horizon=horizon,
repetitions=10,
gaussian=True,
savefig="Missed_detection__SubGaussianGLR__Gaussian_T1000_N10__19deltas__19taus",
)
We consider again a problem with $T=1000$ samples, first coming from a distribution of mean $\mu^1 = 0.25$ then from a second distribution of mean $\mu^2 = 0.75$ (largest gap, $\Delta = 0.5$). We consider also a single breakpoint located at $\tau = \frac{1}{2} T = 500$, ie the algorithm will observe $500$ samples from $\nu^1$ then $500$ from $\nu^2$.
We can consider Bernoulli or Gaussian distributions.
horizon = 200
firstMean = mu_1 = 0.25
secondMean = mu_2 = 0.75
gap = mu_2 - mu_1
tau = 0.5
def explore_parameters(measure,
CDAlgorithm,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
verbose=True,
gaussian=False,
n_jobs=1,
list_of_args_kwargs=tuple(),
mean=True,
):
if isinstance(tau, float): tau = int(tau * horizon)
print(f"\nExploring {measure.__name__} for algorithm {CDAlgorithm}, mu^1 = {firstMean}, mu^2 = {secondMean}, and horizon = {horizon}, tau = {tau}...")
nb_of_args_kwargs = len(list_of_args_kwargs)
print(f"with {nb_of_args_kwargs} values for args, kwargs, and {repetitions} repetitions.")
results = np.zeros(nb_of_args_kwargs) if mean else np.zeros((repetitions, nb_of_args_kwargs))
for i, argskwargs in tqdm(enumerate(list_of_args_kwargs), desc="ArgsKwargs", leave=False):
args, kwargs = argskwargs
# now the random Monte Carlo repetitions
for j, rep in tqdm(enumerate(range(repetitions)), desc="Repetitions", leave=False):
data = get_toy_data(firstMean=firstMean, secondMean=secondMean, tau=tau, horizon=horizon, gaussian=gaussian)
result = measure(data, tau, CDAlgorithm, *args, **kwargs)
if mean:
results[i] += result
else:
results[j, i] = result
if mean:
results[i] /= repetitions
        if verbose: print(f"For args = {args}, kwargs = {kwargs} ({i}th), {'mean' if mean else 'vector of'} result = {results[i] if mean else results[:, i]}")
return results
I want to (try to) use joblib.Parallel to run the "repetitions" for loop in parallel, for instance on 4 cores on my machine (a stripped-down example of the pattern is shown right after the function).
def explore_parameters_parallel(measure,
CDAlgorithm,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
verbose=True,
gaussian=False,
n_jobs=CPU_COUNT,
list_of_args_kwargs=tuple(),
mean=True,
):
if isinstance(tau, float): tau = int(tau * horizon)
print(f"\nExploring {measure.__name__} for algorithm {CDAlgorithm}, mu^1 = {firstMean}, mu^2 = {secondMean}, and horizon = {horizon}, tau = {tau}...")
nb_of_args_kwargs = len(list_of_args_kwargs)
print(f"with {nb_of_args_kwargs} values for args, kwargs, and {repetitions} repetitions.")
results = np.zeros(nb_of_args_kwargs) if mean else np.zeros((repetitions, nb_of_args_kwargs))
def delayed_measure(i, j, argskwargs):
args, kwargs = argskwargs
data = get_toy_data(firstMean=firstMean, secondMean=secondMean, tau=tau, horizon=horizon, gaussian=gaussian)
return i, j, measure(data, tau, CDAlgorithm, *args, **kwargs)
# now the random Monte Carlo repetitions
for i, j, result in Parallel(n_jobs=n_jobs, verbose=int(verbose))(
delayed(delayed_measure)(i, j, argskwargs)
for i, argskwargs in tqdm(enumerate(list_of_args_kwargs), desc="ArgsKwargs", leave=False)
            for j in tqdm(range(repetitions), desc="Repetitions||", leave=False)
):
if mean:
results[i] += result
else:
results[j, i] = result
if mean:
results /= repetitions
if verbose:
for i, argskwargs in enumerate(list_of_args_kwargs):
args, kwargs = argskwargs
print(f"For args = {args}, kwargs = {kwargs} ({i}th), {'mean' if mean else 'vector of'} result = {results[i]}")
return results
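In isolation, the joblib pattern used above is simple: wrap the function with delayed, hand a generator of calls to a Parallel object, and iterate over the results, which come back in the input order. A minimal self-contained sketch:
from joblib import Parallel, delayed

def slow_square(x):
    return x * x

# Dispatch the ten calls to 4 worker processes; results preserve the input order
results = Parallel(n_jobs=4)(delayed(slow_square)(x) for x in range(10))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]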
def view1D_explore_parameters(measure, name,
CDAlgorithm,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
verbose=False,
gaussian=False,
n_jobs=CPU_COUNT,
list_of_args_kwargs=tuple(),
argskwargs2str=None,
savefig=None,
):
explore = explore_parameters_parallel if n_jobs > 1 else explore_parameters
results = explore(measure,
CDAlgorithm,
tau=tau,
                      firstMean=firstMean,
                      secondMean=secondMean,
horizon=horizon,
repetitions=repetitions,
verbose=verbose,
gaussian=gaussian,
n_jobs=n_jobs,
list_of_args_kwargs=list_of_args_kwargs,
mean=False,
)
fig = plt.figure()
plt.boxplot(results)
    plt.title(fr"{name} for {CDAlgorithm.__name__}, with $T={horizon}$, {'Gaussian' if gaussian else 'Bernoulli'} data, $\mu_1={firstMean:.3g}$, $\mu_2={secondMean:.3g}$, $\tau={tau:.3g}$ and ${repetitions}$ repetitions")
plt.ylabel(f"{name}")
x_ticklabels = []
for argskwargs in list_of_args_kwargs:
args, kwargs = argskwargs
x_ticklabels.append(f"{args}, {kwargs}" if argskwargs2str is None else argskwargs2str(args, kwargs))
ax = plt.gca()
ax.set_xticklabels(x_ticklabels, rotation=80, verticalalignment="top")
show_and_save(savefig=savefig)
return fig
Monitored
list_of_args_kwargs_for_Monitored = tuple([
    ((), {'window_size': w, 'threshold_b': None})  # empty args; kwargs give the window size w
for w in [5, 10, 20, 40, 80, 120, 160, 200, 250, 300, 350, 400, 500, 1000, 1500]
])
argskwargs2str_for_Monitored = lambda args, kwargs: fr"$w={kwargs['window_size']:.4g}$"
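As a quick sanity check of this formatter on the grid defined just above:
args, kwargs = list_of_args_kwargs_for_Monitored[0]
print(argskwargs2str_for_Monitored(args, kwargs))  # -> $w=5$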
On a first Bernoulli problem, a very easy one (with a large gap of $\Delta=0.5$).
horizon = 100
firstMean = mu_1 = 0.25
secondMean = mu_2 = 0.75
gap = mu_2 - mu_1
tau = 0.5
%%time
explore_parameters(detection_delay,
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
verbose=True,
gaussian=False,
n_jobs=1,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
)
%%time
explore_parameters_parallel(detection_delay,
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
verbose=True,
gaussian=False,
n_jobs=4,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
)
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=False,
n_jobs=4,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
argskwargs2str=argskwargs2str_for_Monitored,
savefig=f"Detection_delay__Monitored__Bernoulli_T1000_N100__{len(list_of_args_kwargs_for_Monitored)}",
)
On the same problem, with $1000$ samples instead of $100$.
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=10*horizon,
repetitions=100,
gaussian=False,
n_jobs=4,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
argskwargs2str=argskwargs2str_for_Monitored,
savefig=f"Detection_delay__Monitored__Bernoulli_T10000_N100__{len(list_of_args_kwargs_for_Monitored)}",
)
On two Gaussian problems, one with a gap of $\Delta=0.5$ (easy) and a harder one with a gap of $\Delta=0.1$. It is very intriguing that a small difference in the gap can yield such large differences in the detection delay (or in the missed-detection probability: a detection delay of $D=T-\tau$ means a missed detection!).
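A rough heuristic makes this less surprising: the number of samples needed to detect a change of size $\Delta$ under noise of scale $\sigma$ grows as $\propto \sigma^2/\Delta^2$ (up to constants and logarithmic factors), so shrinking the gap from $0.5$ to $0.1$ multiplies the required delay by roughly $25$.
# Heuristic: detection cost scales as sigma^2 / Delta^2 (up to constants and logs),
# so the ratio between two gaps is the informative quantity
Delta_easy, Delta_hard = 0.5, 0.1
print((Delta_easy / Delta_hard) ** 2)  # -> 25.0, the small-gap problem is ~25x harder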
horizon = 10000
firstMean = mu_1 = -0.25
secondMean = mu_2 = 0.25
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=True,
n_jobs=4,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
argskwargs2str=argskwargs2str_for_Monitored,
savefig=f"Detection_delay__Monitored__Gaussian_T1000_N100__{len(list_of_args_kwargs_for_Monitored)}",
)
With a smaller gap, the problem gets harder, and can become almost impossible to solve within the given time horizon.
horizon = 10000
firstMean = mu_1 = -0.1
secondMean = mu_2 = 0.1
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=True,
n_jobs=4,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
argskwargs2str=argskwargs2str_for_Monitored,
savefig=f"Detection_delay__Monitored__Bernoulli_T1000_N100__{len(list_of_args_kwargs_for_Monitored)}_2",
)
horizon = 10000
firstMean = mu_1 = -0.05
secondMean = mu_2 = 0.05
gap = mu_2 - mu_1
tau = 0.5
list_of_args_kwargs_for_Monitored = tuple([
    ((), {'window_size': w, 'threshold_b': None})  # empty args; kwargs give the window size w
for w in [5, 10, 20, 40, 80, 120, 160, 200, 250, 300, 350, 400, 500, 1000, 1500, 2000, 2500, 3000, 4000]
])
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
Monitored,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=True,
n_jobs=4,
list_of_args_kwargs=list_of_args_kwargs_for_Monitored,
argskwargs2str=argskwargs2str_for_Monitored,
savefig=f"Detection_delay__Monitored__Bernoulli_T1000_N100__{len(list_of_args_kwargs_for_Monitored)}_3",
)
Bernoulli GLR
list_of_args_kwargs_for_BernoulliGLR = tuple([
    ((), {'mult_threshold_h': h})  # empty args; kwargs give the multiplicative threshold h
for h in [0.0001, 0.01, 0.1, 0.5, 0.9, 1, 2, 5, 10, 20, 50, 100, 1000, 10000]
])
def argskwargs2str_for_BernoulliGLR(args, kwargs):
h = kwargs['mult_threshold_h']
return fr"$h_0={h:.4g}$" if h is not None else "$h=$'auto'"
First, for a Bernoulli problem:
horizon = 1000
firstMean = mu_1 = 0.25
secondMean = mu_2 = 0.75
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Detection_delay__BernoulliGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"False_alarm__BernoulliGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}",
)
%%time
_ = view1D_explore_parameters(missed_detection, "Missed detection probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Missed_detection__BernoulliGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}",
)
Then, for a harder Bernoulli problem. Here the gap is smaller by a factor $\sqrt{2}$, $\Delta=\frac{1}{2\sqrt{2}}$, so the problem should be about twice as hard, since the complexity scales as $\propto \frac{1}{\Delta^2}$.
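A quick numeric check of that claim:
import numpy as np
gap_easy, gap_hard = 0.5, 1 / (2 * np.sqrt(2))
print(f"New gap = {gap_hard:.4f}, complexity ratio = {(gap_easy / gap_hard) ** 2:.1f}")  # -> New gap = 0.3536, complexity ratio = 2.0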
horizon = 1000
firstMean = mu_1 = 0.25
gap = 1/2 / np.sqrt(2)
print(f"Gap = {gap}")
secondMean = mu_2 = mu_1 + gap
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Detection_delay__BernoulliGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}_2",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"False_alarm__BernoulliGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}_2",
)
%%time
_ = view1D_explore_parameters(missed_detection, "Missed detection probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Missed_detection__BernoulliGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}_2",
)
Then, for a much harder Bernoulli problem. Here the gap is even smaller, $\Delta=\frac{1}{10}$, so the problem should (again) be much harder, since the complexity scales as $\propto \frac{1}{\Delta^2}$.
list_of_args_kwargs_for_BernoulliGLR = tuple([
    ((), {'mult_threshold_h': h})  # empty args; kwargs give the multiplicative threshold h
for h in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2]
])
horizon = 1000
firstMean = mu_1 = 0.25
gap = 0.1
print(f"Gap = {gap}")
secondMean = mu_2 = mu_1 + gap
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Detection_delay__BernoulliGLR__Bernoulli_T1000_N100__params{len(list_of_args_kwargs_for_BernoulliGLR)}_4",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"False_alarm__BernoulliGLR__Bernoulli_T1000_N100__params{len(list_of_args_kwargs_for_BernoulliGLR)}_5",
)
%%time
_ = view1D_explore_parameters(missed_detection, "Missed detection probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=100,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Missed_detection__BernoulliGLR__Bernoulli_T1000_N100__params{len(list_of_args_kwargs_for_BernoulliGLR)}_5",
)
And now on Gaussian problems:
horizon = 1000
firstMean = mu_1 = -0.25
secondMean = mu_2 = 0.25
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Detection_delay__BernoulliGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=50,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"False_alarm__BernoulliGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}",
)
horizon = 1000
firstMean = mu_1 = -0.05
secondMean = mu_2 = 0.05
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Detection_delay__BernoulliGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}_2",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"False_alarm__BernoulliGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}_2",
)
%%time
_ = view1D_explore_parameters(missed_detection, "Missed detection probability",
BernoulliGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_BernoulliGLR,
argskwargs2str=argskwargs2str_for_BernoulliGLR,
savefig=f"Missed_detection__BernoulliGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_BernoulliGLR)}_2",
)
Gaussian GLR
list_of_args_kwargs_for_GaussianGLR = tuple([
    ((), {'mult_threshold_h': h})  # empty args; kwargs give the multiplicative threshold h
for h in [0.0001, 0.01, 0.1, 0.5, 0.9, 1, 2, 5, 10, 20, 50, 100, 1000, 10000]
])
def argskwargs2str_for_GaussianGLR(args, kwargs):
h = kwargs['mult_threshold_h']
return fr"$h={h:.4g}$" if h is not None else "$h=$'auto'"
First, for a Bernoulli problem:
horizon = 1000
firstMean = mu_1 = 0.25
secondMean = mu_2 = 0.75
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
GaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_GaussianGLR,
argskwargs2str=argskwargs2str_for_GaussianGLR,
savefig=f"Detection_delay__GaussianGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_GaussianGLR)}_2",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm",
GaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_GaussianGLR,
argskwargs2str=argskwargs2str_for_GaussianGLR,
savefig=f"False_alarm__GaussianGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_GaussianGLR)}_2",
)
Then, for a Gaussian problem:
horizon = 1000
firstMean = mu_1 = -0.1
secondMean = mu_2 = 0.1
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
GaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_GaussianGLR,
argskwargs2str=argskwargs2str_for_GaussianGLR,
savefig=f"Detection_delay__GaussianGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_GaussianGLR)}_2",
)
And for a harder Gaussian problem:
horizon = 1000
firstMean = mu_1 = -0.01
secondMean = mu_2 = 0.01
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
GaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_GaussianGLR,
argskwargs2str=argskwargs2str_for_GaussianGLR,
savefig=f"Detection_delay__GaussianGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_GaussianGLR)}_3",
)
CUSUM
list_of_args_kwargs_for_CUSUM = tuple([
    ((), {'epsilon': epsilon, 'threshold_h': h, 'M': M})  # empty args; kwargs from the (epsilon, h, M) grid below
for epsilon in [0.05, 0.1, 0.5, 0.75, 0.9]
for h in [None, 0.01, 0.1, 1, 10]
for M in [50, 100, 150, 200, 500]
])
print(f"Exploring {len(list_of_args_kwargs_for_CUSUM)} different values of (h, epsilon, M) for CUSUM...")
def argskwargs2str_for_CUSUM(args, kwargs):
epsilon = kwargs['epsilon']
M = kwargs['M']
h = kwargs['threshold_h']
return fr"$\varepsilon={epsilon:.4g}$, $M={M}$, $h={h:.4g}$" if h is not None else fr"$\varepsilon={epsilon:.4g}$, $M={M}$, $h=$'auto'"
First, for a Bernoulli problem:
horizon = 1000
firstMean = mu_1 = 0.25
secondMean = mu_2 = 0.75
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
CUSUM,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_CUSUM,
argskwargs2str=argskwargs2str_for_CUSUM,
savefig=f"Detection_delay__CUSUM__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_CUSUM)}",
)
Now for the first problem ($T=1000$) and the PHT algorithm.
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
PHT,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_CUSUM,
argskwargs2str=argskwargs2str_for_CUSUM,
savefig=f"Detection_delay__PHT__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_CUSUM)}",
)
Then, for a Gaussian problem with the same gap:
horizon = 1000
firstMean = mu_1 = -0.25
secondMean = mu_2 = 0.25
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
CUSUM,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_CUSUM,
argskwargs2str=argskwargs2str_for_CUSUM,
savefig=f"Detection_delay__CUSUM__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_CUSUM)}",
)
Sub-Gaussian GLR
list_of_args_kwargs_for_SubGaussianGLR = tuple([
    ((), {'delta': delta, 'joint': joint, 'sigma': sigma})  # empty args; kwargs from the (joint, delta, sigma) grid below
for joint in [True, False]
for delta in [10, 1, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]
for sigma in [10*SIGMA, SIGMA, 0.1*SIGMA]
])
def argskwargs2str_for_SubGaussianGLR(args, kwargs):
delta = kwargs['delta']
joint = kwargs['joint']
sigma = kwargs['sigma']
return fr"$\delta={delta:.4g}$, {'joint' if joint else 'disjoint'}, $\sigma={sigma:.4g}$"
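Note that this grid is the Cartesian product of the three parameter lists, so its size is easy to predict:
# 2 choices of joint x 9 values of delta x 3 values of sigma = 54 configurations
print(len(list_of_args_kwargs_for_SubGaussianGLR))  # -> 54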
First, for a Bernoulli problem:
horizon = 1000
firstMean = mu_1 = 0.25
secondMean = mu_2 = 0.75
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
SubGaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_SubGaussianGLR,
argskwargs2str=argskwargs2str_for_SubGaussianGLR,
savefig=f"Detection_delay__SubGaussianGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_SubGaussianGLR)}",
)
%%time
_ = view1D_explore_parameters(false_alarm, "False alarm probability",
SubGaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=False,
list_of_args_kwargs=list_of_args_kwargs_for_SubGaussianGLR,
argskwargs2str=argskwargs2str_for_SubGaussianGLR,
savefig=f"False_alarm__SubGaussianGLR__Bernoulli_T1000_N10__params{len(list_of_args_kwargs_for_SubGaussianGLR)}",
)
Then, for a Gaussian problem:
horizon = 1000
firstMean = mu_1 = -0.1
secondMean = mu_2 = 0.1
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
SubGaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_SubGaussianGLR,
argskwargs2str=argskwargs2str_for_SubGaussianGLR,
savefig=f"Detection_delay__SubGaussianGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_SubGaussianGLR)}",
)
And for a harder Gaussian problem:
horizon = 1000
firstMean = mu_1 = -0.01
secondMean = mu_2 = 0.01
gap = mu_2 - mu_1
tau = 0.5
%%time
_ = view1D_explore_parameters(detection_delay, "Detection delay",
SubGaussianGLR,
tau=tau,
firstMean=mu_1,
secondMean=mu_2,
horizon=horizon,
repetitions=10,
gaussian=True,
list_of_args_kwargs=list_of_args_kwargs_for_SubGaussianGLR,
argskwargs2str=argskwargs2str_for_SubGaussianGLR,
savefig=f"Detection_delay__SubGaussianGLR__Gaussian_T1000_N10__params{len(list_of_args_kwargs_for_SubGaussianGLR)}_2",
)