First, be sure to be in the main folder, or to have installed SMPyBandits, and import MAB from the Environment package:
!pip install SMPyBandits watermark
%load_ext watermark
%watermark -v -m -p SMPyBandits -a "Lilian Besson"
from SMPyBandits.Environment import MAB
Then, import all the types of arms:
from SMPyBandits.Arms import *
# Check that these arm classes exist:
Constant, Bernoulli, Gaussian, Exponential, ExponentialFromMean, Poisson, UniformArm, Gamma, GammaFromMean
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12.4, 7)
This is the simplest example of arms: rewards are constant, not randomly drawn from a distribution. Let's consider an example with $K = 3$ arms.
M_C = MAB([Constant(mu) for mu in [0.1, 0.5, 0.9]])
The plotHistogram() method draws samples from each arm and plots a histogram of their empirical distribution.
For constant arms, there is no need to draw a lot of samples, as they are constant.
_ = M_C.plotHistogram(10)
Then it is easy to create a Multi-Armed Bandit problem, as an instance of the MAB class, either from a list of Arm objects:
M_B = MAB([Bernoulli(mu) for mu in [0.1, 0.5, 0.9]])
Or from a dictionary, with keys "arm_type" and "params":
M_B = MAB({
"arm_type": Bernoulli,
"params": [0.1, 0.5, 0.9]
})
The plotHistogram() method draws a lot of samples from each arm and plots a histogram of their empirical distribution:
_ = M_B.plotHistogram()
And with Gaussian arms, with a small standard deviation of $\sigma = 0.05$, for rewards truncated into $[0, 1]$:
M_G = MAB([Gaussian(mu, sigma=0.05) for mu in [0.1, 0.5, 0.9]])
The histogram clearly shows that low-variance Gaussian arms are easy to separate:
_ = M_G.plotHistogram(100000)
The truncation seems to change the means.
For instance, the first arm (in red) has a small mass on the special value $0$, which probably shifts its mean slightly.
Let's estimate it empirically, and then check against the closed-form formula.
import numpy as np  # needed below for np.mean
arm = Gaussian(0.1, sigma=0.05)
mean = arm.mean
estimated_mean = np.mean(arm.draw_nparray((10000000,)))
mean, estimated_mean
def relative_error(x, y):
    """Relative difference |x - y| / x between two values."""
    return abs(x - y) / x
relative_error(mean, estimated_mean)
$\implies$ That's a relative difference of $0.4\%$, really negligible!
And for other values for $(\mu, \sigma)$:
arm = Gaussian(0.7, sigma=3)
mean = arm.mean
estimated_mean = np.mean(arm.draw_nparray((10000000,)))
mean, estimated_mean
relative_error(mean, estimated_mean)
$\implies$ That's a relative difference of $25\%$!
Clearly, this effect cannot be neglected!
Apparently, the closed-form formula for the mean of a Gaussian arm $\mathcal{N}(\mu, \sigma)$, truncated to $[a,b]$, is: $$\mathbb{E}[X \mid a < X < b] = \mu + \sigma \frac{\phi\left(\frac{a-\mu}{\sigma}\right) - \phi\left(\frac{b-\mu}{\sigma}\right)}{\Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)} = \mu + \sigma \frac{\phi(\alpha) - \phi(\beta)}{\Phi(\beta) - \Phi(\alpha)},$$ with the standardized bounds $\alpha := \frac{a-\mu}{\sigma}$ and $\beta := \frac{b-\mu}{\sigma}$.
Let's compute that.
import numpy as np
from scipy.special import erf
The function $$\phi(x) := \frac{1}{\sqrt{2 \pi}} \exp\left(- \frac{1}{2} x^2 \right).$$
def phi(xi):
r"""The :math:`\phi(\xi)` function, defined by:
.. math:: \phi(\xi) := \frac{1}{\sqrt{2 \pi}} \exp\left(- \frac12 \xi^2 \right)
It is the probability density function of the standard normal distribution, see https://en.wikipedia.org/wiki/Standard_normal_distribution.
"""
return np.exp(- 0.5 * xi**2) / np.sqrt(2. * np.pi)
The function $$\Phi(x) := \frac{1}{2} \left(1 + \mathrm{erf}\left( \frac{x}{\sqrt{2}} \right) \right).$$
def Phi(x):
r"""The :math:`\Phi(x)` function, defined by:
.. math:: \Phi(x) := \frac{1}{2} \left(1 + \mathrm{erf}\left( \frac{x}{\sqrt{2}} \right) \right).
It is the cumulative distribution function of the standard normal distribution, see https://en.wikipedia.org/wiki/Cumulative_distribution_function
"""
return (1. + erf(x / np.sqrt(2.))) / 2.
mu, sigma, mini, maxi = arm.mu, arm.sigma, arm.min, arm.max
mu, sigma, mini, maxi
alpha, beta = (mini - mu) / sigma, (maxi - mu) / sigma  # standardized truncation bounds
other_mean = mu + sigma * (phi(alpha) - phi(beta)) / (Phi(beta) - Phi(alpha))
mean, estimated_mean, other_mean
Well, even this closed-form value does not match the empirical mean. A likely explanation, consistent with the small mass observed at $0$ earlier, is that these arms truncate by clipping the rewards to $[0, 1]$: this gives a censored Gaussian, with some mass sitting on the bounds, rather than a truncated Gaussian obtained by rejection sampling, so the formula above does not apply exactly.
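To illustrate the difference, here is a small sketch, using only NumPy and the phi and Phi functions defined above; the clipping behaviour is an assumption about how the arm truncates, not something taken from the library's code. It compares the truncated-Gaussian formula with the Monte-Carlo mean of a Gaussian whose samples are simply clipped to $[0, 1]$.
# Sketch: truncated (rejection) vs. clipped (censored) Gaussian on [0, 1].
# Assumption: the arm clips its samples to the bounds; this is only an illustration.
mu, sigma, a, b = 0.7, 3.0, 0.0, 1.0
alpha, beta = (a - mu) / sigma, (b - mu) / sigma
# Mean of the truncated Gaussian (samples outside [a, b] are rejected):
truncated_mean = mu + sigma * (phi(alpha) - phi(beta)) / (Phi(beta) - Phi(alpha))
# Monte-Carlo mean of the clipped Gaussian (samples projected onto [a, b]):
clipped_mean = np.mean(np.clip(np.random.normal(mu, sigma, 10**6), a, b))
truncated_mean, clipped_mean
If this assumption is correct, the clipped mean should be close to the empirical mean found above, unlike the value given by the truncated-Gaussian formula.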
Let's forget about this possible issue, and consider that the mean $\mu$ of a Gaussian arm $\mathcal{N}(\mu, \sigma)$ truncated to $[0,1]$ is indeed $\mu$.
But if the variance is larger, it can be very hard to differentiate between arms, and so MAB learning will be harder. With a larger standard deviation of $\sigma = 0.10$, for rewards truncated into $[0, 1]$:
M_G = MAB([Gaussian(mu, sigma=0.10) for mu in [0.1, 0.5, 0.9]])
_ = M_G.plotHistogram(100000)
We see that, due to the truncation, if the mean of the Gaussian is too close to $0$ or $1$, the reward distribution piles up on the boundary and the actual mean reward is shifted (here the blue arm clearly has a visible mass at $1$).
And for a larger standard deviation ($\sigma = 0.25$ here), the effect is even stronger:
M_G = MAB([Gaussian(mu, sigma=0.25) for mu in [0.1, 0.5, 0.9]])
_ = M_G.plotHistogram()
We can do the same with (truncated) Exponential arms. As a convenience, I prefer to work with ExponentialFromMean, in order to create the arm from its mean rather than from its rate parameter $\lambda$.
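As a quick sanity check, one can verify that an arm created this way has the requested mean (a minimal sketch, using only the mean attribute already used above):
arm = ExponentialFromMean(0.5)
arm.mean  # should be (close to) 0.5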
M_E = MAB({ "arm_type": ExponentialFromMean, "params": [0.1, 0.5, 0.9]})
_ = M_E.plotHistogram()
Arms with rewards uniform in $[0,1]$ are continuous versions of a Bernoulli$(0.5)$ arm. They can also be uniform on other intervals.
# The lower_amplitude attribute gives the arm's support as a tuple (lower, amplitude)
UniformArm(0, 1).lower_amplitude
UniformArm(0, 0.1).lower_amplitude
UniformArm(0.4, 0.5).lower_amplitude
UniformArm(0.8, 0.9).lower_amplitude
M_U = MAB([UniformArm(0, 1), UniformArm(0, 0.1), UniformArm(0.4, 0.5), UniformArm(0.8, 0.9)])
_ = M_U.plotHistogram(100000)
Of course, everything works similarly if the rewards are not in $[0, 1]$ but in any interval $[a, b]$.
Note that all my algorithms assume $a = \text{lower} = 0$ and $b = 1$ by default (and work with $\text{amplitude} = b - a$ instead of $b$). The lower value and the amplitude just need to be specified if we stop using the default choice $[0, 1]$.
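For instance, here is a minimal check of this convention, reusing the lower_amplitude attribute shown above on a uniform arm over $[-10, 10]$:
# lower = a = -10 and amplitude = b - a = 20 for rewards in [-10, 10]
UniformArm(-10, 10).lower_amplitude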
For example, Gaussian arms can be truncated into $[-10, 10]$ instead of $[0, 1]$. Let's define some Gaussian arms, with means $-5, 0, 5$ and a standard deviation of $\sigma = 2$.
M_G = MAB([Gaussian(mu, sigma=2, mini=-10, maxi=10) for mu in [-5, 0, 5]])
_ = M_G.plotHistogram(100000)
M_G = MAB([Gaussian(mu, sigma=0.1, mini=-10, maxi=10) for mu in [-5, 0, 5]])
_ = M_G.plotHistogram()
We can do the same with (truncated) Gamma arms. As a convenience, I prefer to work with GammaFromMean, in order to create the arm from its mean rather than from its shape parameter $k$.
The scale $\theta$ is fixed to $1$ by default, and here the rewards will be in $[0, 10]$.
M_Gamma = MAB([GammaFromMean(shape, scale=1, mini=0, maxi=10) for shape in [1, 2, 3, 4, 5]])
_ = M_Gamma.plotHistogram(100000)
As for Gaussian arms, the truncation strongly changes the means of the arm rewards. Here the arm with mean parameter $5$ has its empirical mean visibly affected by the truncation at $10$.
Let's try with non-truncated rewards.
M_G = MAB([Gaussian(mu, sigma=3, mini=float('-inf'), maxi=float('+inf')) for mu in [-10, 0, 10]])
_ = M_G.plotHistogram(100000)
And with non-truncated Gamma arms?
M_Gamma = MAB([GammaFromMean(shape, scale=1, mini=float('-inf'), maxi=float('+inf')) for shape in [1, 2, 3, 4, 5]])
_ = M_Gamma.plotHistogram(100000)
M_Gamma = MAB([GammaFromMean(shape, scale=1, mini=float('-inf'), maxi=float('+inf')) for shape in [10, 20, 30, 40, 50]])
_ = M_Gamma.plotHistogram(1000000)
This small notebook demonstrated how to define arms and Multi-Armed Bandit problems in my framework, SMPyBandits.