Table of Contents¶

1  Easily creating MAB problems

1.1  Constant arms

1.2  Bernoulli arms

1.3  Gaussian arms

1.3.1  Wrong means for Gaussian arms?

1.3.2  Closed form formula

1.3.3  With a larger variance?

1.4  Exponential arms

1.5  Uniform arms

1.6  Arms with rewards outside of [0,1]

1.7  Gamma arms

1.8  Non-truncated Gaussian and Gamma arms

1.9  Conclusion


Easily creating MAB problems¶

First, be sure to be in the main folder, and import MAB from the Environment package:

In [1]:
from sys import path
path.insert(0, '..')
In [3]:
from Environment import MAB

Then, import all the types of arms.

In [6]:
from Arms import *
# Check they exist:
Constant, Bernoulli, Gaussian, Exponential, ExponentialFromMean, Poisson, Uniform, Gamma, GammaFromMean
Out[6]:
(Arms.Constant.Constant,
 Arms.Bernoulli.Bernoulli,
 Arms.Gaussian.Gaussian,
 Arms.Exponential.Exponential,
 Arms.Exponential.ExponentialFromMean,
 Arms.Poisson.Poisson,
 Arms.Uniform.Uniform,
 Arms.Gamma.Gamma,
 Arms.Gamma.GammaFromMean)

Constant arms¶

This is the simplest kind of arm: rewards are constant, not randomly drawn from a distribution.
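As a quick sanity check, every draw from a Constant arm gives back the same value; a small sketch, assuming the usual draw() method shared by all arms:

arm = Constant(0.5)
print([arm.draw() for _ in range(5)])  # [0.5, 0.5, 0.5, 0.5, 0.5]: always the same reward

Let's now consider an example with \(K = 3\) arms.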

In [7]:
M_C = MAB([Constant(mu) for mu in [0.1, 0.5, 0.9]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [Constant(0.1), Constant(0.5), Constant(0.9)] ...
 - with 'arms' = [Constant(0.1), Constant(0.5), Constant(0.9)]
 - with 'means' = [ 0.1  0.5  0.9]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.9
 - with 'minArm' = 0.1

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 2 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
 - with 'arms' represented as: $[Constant(0.1), Constant(0.5), Constant(0.9)^*]$

The plotHistogram() method draws samples from each arm, and plots a histogram of their distribution. For constant arms, there is no need to take many samples, as they are constant.

In [9]:
M_C.plotHistogram(10)
../_images/notebooks_Easily_creating_MAB_problems_9_0.png

Bernoulli arms¶

Then it’s easy to create a Multi-Armed Bandit problem, an instance of the MAB class, either from a list of Arm objects:

In [10]:
M_B = MAB([Bernoulli(mu) for mu in [0.1, 0.5, 0.9]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [B(0.1), B(0.5), B(0.9)] ...
 - with 'arms' = [B(0.1), B(0.5), B(0.9)]
 - with 'means' = [ 0.1  0.5  0.9]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.9
 - with 'minArm' = 0.1

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 1.24 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
 - with 'arms' represented as: $[B(0.1), B(0.5), B(0.9)^*]$

Or from a dictionary, with keys "arm_type" and "params":

In [11]:
M_B = MAB({
    "arm_type": Bernoulli,
    "params": [0.1, 0.5, 0.9]
})
Creating a new MAB problem ...
  Reading arms of this MAB problem from a dictionnary 'configuration' = {'arm_type': <class 'Arms.Bernoulli.Bernoulli'>, 'params': [0.1, 0.5, 0.9]} ...
 - with 'arm_type' = <class 'Arms.Bernoulli.Bernoulli'>
 - with 'params' = [0.1, 0.5, 0.9]
 - with 'arms' = [B(0.1), B(0.5), B(0.9)]
 - with 'means' = [ 0.1  0.5  0.9]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.9
 - with 'minArm' = 0.1

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 1.24 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
 - with 'arms' represented as: $[B(0.1), B(0.5), B(0.9)^*]$

The plotHistogram() method draws many samples from each arm, and plots a histogram of their distribution:

In [12]:
M_B.plotHistogram()
../_images/notebooks_Easily_creating_MAB_problems_15_0.png

Gaussian arms¶

And with Gaussian arms, with a small standard deviation of \(\sigma = 0.05\), for rewards truncated into \([0, 1]\):

In [13]:
M_G = MAB([Gaussian(mu, sigma=0.05) for mu in [0.1, 0.5, 0.9]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [G(0.1, 0.05), G(0.5, 0.05), G(0.9, 0.05)] ...
 - with 'arms' = [G(0.1, 0.05), G(0.5, 0.05), G(0.9, 0.05)]
 - with 'means' = [ 0.1  0.5  0.9]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.9
 - with 'minArm' = 0.1

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 0.375 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
 - with 'arms' represented as: $[G(0.1, 0.05), G(0.5, 0.05), G(0.9, 0.05)^*]$

The histogram clearly shows that low-variance Gaussian arms are easy to separate:

In [14]:
M_G.plotHistogram(100000)
../_images/notebooks_Easily_creating_MAB_problems_19_0.png

Wrong means for Gaussian arms?¶

The truncation seems to change the means.

For instance, the first arm (in red) has a small mass at the special value \(0\): negative samples are clipped up to \(0\), which should slightly change its mean.

Let’s estimate it empirically, and then check with the closed form solution.

In [31]:
arm = Gaussian(0.1, sigma=0.05)
In [32]:
import numpy as np  # for np.mean

mean = arm.mean
estimated_mean = np.mean(arm.draw_nparray((10000000,)))
In [33]:
mean, estimated_mean
Out[33]:
(0.1, 0.10043270258959709)
In [34]:
def relative_error(x, y):
    return abs(x - y) / x

relative_error(mean, estimated_mean)
Out[34]:
0.0043270258959708652

\(\implies\) That’s a relative difference of \(0.4\%\), really negligible!

And for other values of \((\mu, \sigma)\):

In [19]:
arm = Gaussian(0.7, sigma=3)
In [20]:
mean = arm.mean
estimated_mean = np.mean(arm.draw_nparray((10000000,)))
In [21]:
mean, estimated_mean
Out[21]:
(0.7, 0.52655266492913366)
In [23]:
relative_error(mean, estimated_mean)
Out[23]:
0.24778190724409471

\(\implies\) That’s a relative difference of \(25\%\)!

Clearly, this effect cannot be neglected!

Closed form formula¶

Apparently, the closed-form formula for the mean of a Gaussian arm \(\mathcal{N}(\mu, \sigma)\) truncated to \([a,b]\) is:

\[\mathbb{E}(X \mid a < X < b) = \mu + \sigma \frac{\phi\left(\frac{a-\mu}{\sigma}\right) - \phi\left(\frac{b-\mu}{\sigma}\right)}{\Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)} = \mu + \sigma \frac{\phi(\alpha) - \phi(\beta)}{\Phi(\beta) - \Phi(\alpha)}, \quad \text{where } \alpha := \frac{a-\mu}{\sigma} \text{ and } \beta := \frac{b-\mu}{\sigma}.\]

Let’s compute that.

In [24]:
import numpy as np
from scipy.special import erf

The function

\[\phi(x) := \frac{1}{\sqrt{2 \pi}} \exp\left(- \frac{1}{2} x^2 \right).\]
In [27]:
def phi(xi):
    r"""The :math:`\phi(\xi)` function, defined by:

    .. math:: \phi(\xi) := \frac{1}{\sqrt{2 \pi}} \exp\left(- \frac12 \xi^2 \right)

    It is the probability density function of the standard normal distribution, see https://en.wikipedia.org/wiki/Standard_normal_distribution.
    """
    return np.exp(- 0.5 * xi**2) / np.sqrt(2. * np.pi)

The function

\[\Phi(x) := \frac{1}{2} \left(1 + \mathrm{erf}\left( \frac{x}{\sqrt{2}} \right) \right).\]
In [26]:
def Phi(x):
    r"""The :math:`\Phi(x)` function, defined by:

    .. math:: \Phi(x) := \frac{1}{2} \left(1 + \mathrm{erf}\left( \frac{x}{\sqrt{2}} \right) \right).

    It is the cumulative distribution function of the standard normal distribution, see https://en.wikipedia.org/wiki/Cumulative_distribution_function
    """
    return (1. + erf(x / np.sqrt(2.))) / 2.
In [28]:
mu, sigma, mini, maxi = arm.mu, arm.sigma, arm.min, arm.max
mu, sigma, mini, maxi
Out[28]:
(0.7, 3, 0, 1)
In [29]:
other_mean = mu + sigma * (phi(mini) - phi(maxi)) / (Phi(maxi) - Phi(mini))  # warning: phi and Phi should be evaluated at the standardized bounds, see below!
In [30]:
mean, estimated_mean, other_mean
Out[30]:
(0.7, 0.52655266492913366, 2.0795866878592797)

Well, this value is clearly wrong: it is not even inside \([0, 1]\)! But the issue is in the cell above, not in the formula: \(\phi\) and \(\Phi\) must be evaluated at the standardized bounds \(\alpha = \frac{a-\mu}{\sigma}\) and \(\beta = \frac{b-\mu}{\sigma}\), not directly at \(a = 0\) and \(b = 1\). Moreover, the arm apparently clips out-of-range samples to the bounds (hence the point masses visible at \(0\) and \(1\) in the histograms above), and those masses are not accounted for by the conditional mean \(\mathbb{E}(X \mid a < X < b)\).
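Here is a sketch of the corrected computation, reusing the phi and Phi functions and the mu, sigma, mini, maxi values defined above, and assuming the arm clips out-of-range samples to the bounds:

# Corrected computation: phi and Phi are evaluated at the *standardized* bounds.
alpha = (mini - mu) / sigma
beta = (maxi - mu) / sigma

# Mean of the Gaussian *conditioned* to fall inside [mini, maxi]:
conditional_mean = mu + sigma * (phi(alpha) - phi(beta)) / (Phi(beta) - Phi(alpha))

# The arm clips out-of-range samples instead of rejecting them, so there are
# point masses at both bounds; the mean of the clipped arm is a mixture:
clipped_mean = (mini * Phi(alpha)
                + conditional_mean * (Phi(beta) - Phi(alpha))
                + maxi * (1.0 - Phi(beta)))
print(conditional_mean, clipped_mean)

With \(\mu = 0.7\) and \(\sigma = 3\), the clipped mean is about \(0.5264\), in close agreement with the empirical estimate \(0.5266\) found above. So the closed-form formula is fine, once it is evaluated at the standardized bounds and combined with the boundary masses.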

With this in mind, we can still consider that the mean of a Gaussian arm \(\mathcal{N}(\mu, \sigma)\) truncated to \([0,1]\) is approximately \(\mu\), as long as \(\sigma\) is small.

With a larger variance?¶

But if the variance is larger, the arms can be very hard to distinguish, and so MAB learning will be harder. With a larger standard deviation of \(\sigma = 0.1\), for rewards truncated into \([0, 1]\):

In [11]:
M_G = MAB([Gaussian(mu, sigma=0.10) for mu in [0.1, 0.5, 0.9]])
M_G.plotHistogram(100000)
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [G(0.1, 0.1), G(0.5, 0.1), G(0.9, 0.1)] ...
 - with 'arms' = [G(0.1, 0.1), G(0.5, 0.1), G(0.9, 0.1)]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.9
 - with 'minArm' = 0.1

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 0.75 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
../_images/notebooks_Easily_creating_MAB_problems_43_1.png

We see that, due to the truncation, if the mean of the Gaussian is too close to \(0\) or \(1\), a significant part of the mass is clipped onto \(0\) or \(1\) (here the blue arm clearly has a point mass at \(1\), so its actual mean is no longer exactly \(0.9\)).
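A quick check with plain numpy (not the Arms API), assuming as before that the arm clips out-of-range samples to the bounds:

import numpy as np

# Clip one million N(0.9, 0.1^2) samples to [0, 1], like the blue arm above:
rng = np.random.default_rng(42)
samples = np.clip(rng.normal(0.9, 0.1, 1000000), 0.0, 1.0)
print((samples == 1.0).mean())  # about 0.16: the point mass at 1
print(samples.mean())           # close to, but not exactly, 0.9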

And for larger variances, the effect is even stronger:

In [12]:
M_G = MAB([Gaussian(mu, sigma=0.25) for mu in [0.1, 0.5, 0.9]])
M_G.plotHistogram()
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [G(0.1, 0.25), G(0.5, 0.25), G(0.9, 0.25)] ...
 - with 'arms' = [G(0.1, 0.25), G(0.5, 0.25), G(0.9, 0.25)]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.9
 - with 'minArm' = 0.1

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 1.87 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
../_images/notebooks_Easily_creating_MAB_problems_45_1.png

Exponential arms¶

We can do the same with (truncated) Exponential arms. As a convenience, I prefer to work with ExponentialFromMean, which creates the arm from its mean instead of its rate parameter \(\lambda\) (the conversion is checked in the small sketch below).

In [13]:
M_E = MAB({ "arm_type": ExponentialFromMean, "params": [0.1, 0.5, 0.9]})
Creating a new MAB problem ...
  Reading arms of this MAB problem from a dictionnary 'configuration' = {'arm_type': <class 'Arms.Exponential.ExponentialFromMean'>, 'params': [0.1, 0.5, 0.9]} ...
 - with 'arm_type' = <class 'Arms.Exponential.ExponentialFromMean'>
 - with 'params' = [0.1, 0.5, 0.9]
 - with 'arms' = [Exp(10, 1), Exp(1.59, 1), Exp(0.215, 1)]
 - with 'nbArms' = 3
 - with 'maxArm' = 0.900000003233
 - with 'minArm' = 0.100000000055

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 3.4 ...
 - a Optimal Arm Identification factor H_OI(mu) = 26.67% ...
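As a quick check of this conversion, note that an Exponential(\(\lambda\)) arm clipped at \(1\) has mean \(\mathbb{E}[\min(X, 1)] = (1 - e^{-\lambda})/\lambda\). A sketch, assuming as before that samples are clipped at the upper bound, verifying that the \(\lambda\) values printed above give back the requested means:

import numpy as np

def clipped_exp_mean(lmbda, maxi=1.0):
    """Mean of an Exponential(lmbda) arm clipped at maxi:
    E[min(X, maxi)] = (1 - exp(-lmbda * maxi)) / lmbda."""
    return (1.0 - np.exp(-lmbda * maxi)) / lmbda

for lmbda in [10, 1.59, 0.215]:  # the lambda values printed above
    print(lmbda, clipped_exp_mean(lmbda))
# prints means close to 0.1, 0.5 and 0.9: the 'params' we asked for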
In [14]:
M_E.plotHistogram()
../_images/notebooks_Easily_creating_MAB_problems_48_0.png

Uniform arms¶

Arms with rewards uniform in \([0,1]\) are a continuous counterpart of Bernoulli\((0.5)\): same mean \(0.5\), but a smaller variance (\(1/12\) instead of \(1/4\)). They can also be uniform in other intervals.
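A minimal check of this claim, assuming draw_nparray is available on these arms as it is for the Gaussian arms above:

import numpy as np

u = Uniform(0, 1).draw_nparray((100000,))
b = Bernoulli(0.5).draw_nparray((100000,))
print(np.mean(u), np.mean(b))  # both close to 0.5
print(np.var(u), np.var(b))    # close to 1/12 ~ 0.083 and 1/4 = 0.25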

In [15]:
Uniform(0, 1).lower_amplitude
Uniform(0, 0.1).lower_amplitude
Uniform(0.4, 0.5).lower_amplitude
Uniform(0.8, 0.9).lower_amplitude
Out[15]:
(0, 1)
Out[15]:
(0, 0.1)
Out[15]:
(0.4, 0.09999999999999998)
Out[15]:
(0.8, 0.09999999999999998)
In [16]:
M_U = MAB([Uniform(0, 1), Uniform(0, 0.1), Uniform(0.4, 0.5), Uniform(0.8, 0.9)])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [U(0, 1), U(0, 0.1), U(0.4, 0.1), U(0.8, 0.1)] ...
 - with 'arms' = [U(0, 1), U(0, 0.1), U(0.4, 0.1), U(0.8, 0.1)]
 - with 'nbArms' = 4
 - with 'maxArm' = 0.85
 - with 'minArm' = 0.05

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 2.47 ...
 - a Optimal Arm Identification factor H_OI(mu) = 36.25% ...
In [17]:
M_U.plotHistogram(100000)
../_images/notebooks_Easily_creating_MAB_problems_52_0.png

Arms with rewards outside of \([0, 1]\)¶

Of course, everything works similarly if rewards are not in \([0, 1]\) but in any other interval \([a, b]\).

Note that all my algorithms assume \(a = \text{lower} = 0\) and \(b = 1\) by default (and work with \(\text{amplitude} = b - a\) instead of \(b\)). The lower value and the amplitude just need to be specified if we stop using the default choice \([0, 1]\).
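Internally, rewards in \([a, b]\) can then be mapped back to \([0, 1]\) by a simple affine rescaling. A minimal sketch (normalize is a hypothetical helper, written here only to illustrate the lower/amplitude convention, not part of the framework):

def normalize(reward, lower=0.0, amplitude=1.0):
    """Map a reward from [lower, lower + amplitude] back to [0, 1].
    (Hypothetical helper, only to illustrate the convention.)"""
    return (reward - lower) / amplitude

print(normalize(5, lower=-10, amplitude=20))  # 0.75: a reward of 5 in [-10, 10]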

For example, Gaussian arms can be truncated into \([-10, 10]\) instead of \([0, 1]\). Let's define some Gaussian arms with means \(-5, 0, 5\) and a standard deviation of \(\sigma = 2\).

In [18]:
M_G = MAB([Gaussian(mu, sigma=2, mini=-10, maxi=10) for mu in [-5, 0, 5]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [G(-5, 2), G(0, 2), G(5, 2)] ...
 - with 'arms' = [G(-5, 2), G(0, 2), G(5, 2)]
 - with 'nbArms' = 3
 - with 'maxArm' = 5
 - with 'minArm' = -5

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 1.2 ...
 - a Optimal Arm Identification factor H_OI(mu) = 16.67% ...
In [19]:
M_G.plotHistogram(100000)
../_images/notebooks_Easily_creating_MAB_problems_55_0.png
In [20]:
M_G = MAB([Gaussian(mu, sigma=0.1, mini=-10, maxi=10) for mu in [-5, 0, 5]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [G(-5, 0.1), G(0, 0.1), G(5, 0.1)] ...
 - with 'arms' = [G(-5, 0.1), G(0, 0.1), G(5, 0.1)]
 - with 'nbArms' = 3
 - with 'maxArm' = 5
 - with 'minArm' = -5

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 0.06 ...
 - a Optimal Arm Identification factor H_OI(mu) = 16.67% ...
In [21]:
M_G.plotHistogram()
../_images/notebooks_Easily_creating_MAB_problems_57_0.png

Gamma arms¶

We can do the same with (truncated) Gamma arms. As a convenience, I prefer to work with GammaFromMean, which creates the arm from its mean instead of its shape parameter \(k\). The scale \(\theta\) is fixed to \(1\) by default, and here the rewards will be in \([0, 10]\).
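Since a \(\Gamma(k, \theta)\) distribution has mean \(k \theta\), GammaFromMean presumably just sets \(k = \text{mean} / \theta\) (ignoring the effect of the truncation). A quick sketch of this conversion:

# Gamma(k, theta) has mean k * theta, so (ignoring truncation) the shape
# parameter matching a target mean is simply k = mean / theta:
theta = 1
for target_mean in [1, 2, 3, 4, 5]:
    print(target_mean, target_mean / theta)
# shapes 1, 2, 3, 4, 5: consistent with the \Gamma(k, 1) arms printed below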

In [31]:
M_Gamma = MAB([GammaFromMean(shape, scale=1, mini=0, maxi=10) for shape in [1, 2, 3, 4, 5]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [\Gamma(1, 1), \Gamma(2, 1), \Gamma(3, 1), \Gamma(4, 1), \Gamma(5, 1)] ...
 - with 'arms' = [\Gamma(1, 1), \Gamma(2, 1), \Gamma(3, 1), \Gamma(4, 1), \Gamma(5, 1)]
 - with 'nbArms' = 5
 - with 'maxArm' = 5.0
 - with 'minArm' = 1.0

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 75.7 ...
 - a Optimal Arm Identification factor H_OI(mu) = 60.00% ...
In [32]:
M_Gamma.plotHistogram(100000)
../_images/notebooks_Easily_creating_MAB_problems_60_0.png

As for Gaussian arms, the truncation changes the distribution of the arm rewards: the mass above \(10\) is clipped onto the bound, so the arm with the largest mean parameter (\(5\)) shows a small point mass at \(10\), and the empirical means are slightly modified.

Non-truncated Gaussian and Gamma arms¶

Let's try with non-truncated rewards.

In [28]:
M_G = MAB([Gaussian(mu, sigma=3, mini=float('-inf'), maxi=float('+inf')) for mu in [-10, 0, 10]])
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [G(-10, 3), G(0, 3), G(10, 3)] ...
 - with 'arms' = [G(-10, 3), G(0, 3), G(10, 3)]
 - with 'nbArms' = 3
 - with 'maxArm' = 10
 - with 'minArm' = -10

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 0.9 ...
 - a Optimal Arm Identification factor H_OI(mu) = 66.67% ...
In [29]:
M_G.plotHistogram(100000)
../_images/notebooks_Easily_creating_MAB_problems_64_0.png

And with non-truncated Gamma arms?

In [36]:
M_Gamma = MAB([GammaFromMean(shape, scale=1, mini=float('-inf'), maxi=float('+inf')) for shape in [1, 2, 3, 4, 5]])
M_Gamma.plotHistogram(100000)
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [\Gamma(1, 1), \Gamma(2, 1), \Gamma(3, 1), \Gamma(4, 1), \Gamma(5, 1)] ...
 - with 'arms' = [\Gamma(1, 1), \Gamma(2, 1), \Gamma(3, 1), \Gamma(4, 1), \Gamma(5, 1)]
 - with 'nbArms' = 5
 - with 'maxArm' = 5.0
 - with 'minArm' = 1.0

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 75.7 ...
 - a Optimal Arm Identification factor H_OI(mu) = 80.00% ...
../_images/notebooks_Easily_creating_MAB_problems_66_1.png
In [38]:
M_Gamma = MAB([GammaFromMean(shape, scale=1, mini=float('-inf'), maxi=float('+inf')) for shape in [10, 20, 30, 40, 50]])
M_Gamma.plotHistogram(1000000)
Creating a new MAB problem ...
  Taking arms of this MAB problem from a list of arms 'configuration' = [\Gamma(10, 1), \Gamma(20, 1), \Gamma(30, 1), \Gamma(40, 1), \Gamma(50, 1)] ...
 - with 'arms' = [\Gamma(10, 1), \Gamma(20, 1), \Gamma(30, 1), \Gamma(40, 1), \Gamma(50, 1)]
 - with 'nbArms' = 5
 - with 'maxArm' = 50.0
 - with 'minArm' = 10.0

This MAB problem has:
 - a [Lai & Robbins] complexity constant C(mu) = 757 ...
 - a Optimal Arm Identification factor H_OI(mu) = 80.00% ...
../_images/notebooks_Easily_creating_MAB_problems_67_1.png

Conclusion¶

This small notebook demonstrated how to define arms and Multi-Armed Bandit problems in my framework.