This small Jupyter notebook presents an experiment in the context of Multi-Armed Bandit (MAB) problems.
I am trying to answer a simple question:
"Can we use generic unsupervised learning algorithm, like Kernel Density estimation or Ridge Regression, instead of MAB algorithms like UCB or Thompson Sampling ?
I will use my SMPyBandits library, for which complete documentation is available at https://smpybandits.github.io/, and the scikit-learn package.
First, be sure to be in the main folder, or to have installed SMPyBandits, and import the MAB class from the Environment package:
import numpy as np
!pip install SMPyBandits watermark
%load_ext watermark
%watermark -v -m -p SMPyBandits -a "Lilian Besson"
from SMPyBandits.Environment import MAB
And also import the Gaussian class, to create Gaussian-distributed arms.
from SMPyBandits.Arms import Gaussian
# Just improving the ?? in Jupyter. Thanks to https://nbviewer.jupyter.org/gist/minrk/7715212
from __future__ import print_function
from IPython.core import page
def myprint(s):
    try:
        print(s['text/plain'])
    except (KeyError, TypeError):
        print(s)

page.page = myprint
Gaussian?
Let's create a simple bandit problem with 3 arms, and visualize a histogram showing the distribution of rewards.
means = [0.45, 0.5, 0.55]
M = MAB(Gaussian(mu, sigma=0.2) for mu in means)
_ = M.plotHistogram(horizon=1000000)
As we can see, the rewards of the different arms are close. It won't be easy to distinguish them.
Then we can generate some draws from all arms, from time $t=1$ to $t=T_0$, for, let's say, $T_0 = 1000$:
T_0 = 1000
shape = (T_0,)
draws = np.array([ b.draw_nparray(shape) for b in M.arms ])
draws
The empirical mean of each arm can be estimated quite easily, and could be used to make all the decisions for $t \geq T_0 + 1$.
empirical_means = np.mean(draws, axis=1)
empirical_means
Clearly, the last arm is the best. And the empirical means $\hat{\mu}_k(t)$ for $k=1,\dots,K$ are really close to the true ones, as $T_0 = 1000$ is quite large.
def relative_error(x, y):
    return np.abs(x - y) / x
relative_error(means, empirical_means)
That's less than $3\%$ of relative error, which is already quite good!
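This order of magnitude is expected: the standard error of each empirical mean is $\sigma / \sqrt{T_0} = 0.2 / \sqrt{1000} \simeq 0.0063$, i.e., about $1.3\%$ of the means $\mu_k \simeq 0.5$, so relative errors of a few percent are exactly what the Central Limit Theorem predicts.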
Conclusion: if we have "enough" samples and the distributions are not too close, there is no need to do any learning: just pick the arm with the highest empirical mean from now on, and you will be fine!
best_arm_estimated = np.argmax(empirical_means)
best_arm = np.argmax(means)
assert best_arm_estimated == best_arm, "Error: the best arm is wrongly estimated, even after {} samples.".format(T_0)
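To make this "explore then commit" rule concrete, here is a minimal sketch (not a SMPyBandits policy; the helper name and horizon are purely illustrative) that re-samples an exploration phase and then commits to the empirical best arm for the rest of a horizon $T$:

def explore_then_commit(M, T_0=1000, T=10000):
    """Sketch: draw T_0 samples from every arm, then always play the empirical best arm."""
    exploration = np.array([b.draw_nparray((T_0,)) for b in M.arms])    # uniform exploration phase
    chosen = np.argmax(np.mean(exploration, axis=1))                    # commit to the empirical best arm
    exploitation = M.arms[chosen].draw_nparray((T - M.nbArms * T_0,))   # remaining plays, all on that arm
    return chosen, np.mean(exploitation)

explore_then_commit(M)   # should return (2, about 0.55) on this problem, most of the time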
But maybe $T_0 = 1000$ was really too large...
Let's assume that the initial data was obtained from an algorithm which starts playing by exploring every arm, uniformly at random, until it gets "enough" data.
What if we want to use the same technique with very little data? Let's see, with $T_0 = 10$, if the empirical means are still as close to the true ones.
np.random.seed(10000) # for reproducibility of the error best_arm_estimated = 1
T_0 = 10
draws = np.array([ b.draw_nparray((T_0, )) for b in M.arms ])
empirical_means = np.mean(draws, axis=1)
empirical_means
relative_error(means, empirical_means)
best_arm_estimated = np.argmax(empirical_means)
best_arm_estimated
assert best_arm_estimated == best_arm, "Error: the best arm is wrongly estimated, even after {} samples.".format(T_0)
Clearly, if there are not enough samples, the empirical mean estimator can be wrong. It will not always be wrong with so few samples, but it can be.
We should use the initial data for more than just getting empirical means.
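To quantify how often this failure happens, here is a small Monte-Carlo estimate (illustration only; the number of trials and the seed are arbitrary) of the probability that the empirical best arm is wrong after only $T_0 = 10$ samples per arm:

np.random.seed(123)   # arbitrary seed, only for reproducibility of this estimate
n_trials = 1000
n_errors = sum(
    np.argmax(np.mean(np.array([b.draw_nparray((10,)) for b in M.arms]), axis=1)) != best_arm
    for _ in range(n_trials)
)
print("Wrong best arm in about {:.1%} of the trials".format(n_errors / float(n_trials)))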
Let's use a simple Unsupervised Learning algorithm, implemented in the scikit-learn (sklearn) package: 1D Kernel Density estimation.
from sklearn.neighbors import KernelDensity
First, we need to create a model.
Here we assume we know that the arms are Gaussian, so fitting a Gaussian kernel will probably work best.
The bandwidth parameter should be of the order of the standard deviations $\sigma_k$ of the arms (we used $\sigma = 0.2$).
kde = KernelDensity(kernel='gaussian', bandwidth=0.2)
kde
Then, we will feed it the initial data, obtained from the initial phase of uniform exploration, from $t = 1, \dots, T_0$.
draws
draws.shape
We need to use the transpose of this array, as the data should have shape (n_samples, n_features), i.e., of shape (10, 3) here.
kde.fit?
kde.fit(draws.T)
The score_samples(X) method can be used to evaluate the density on sample data (i.e., the log-likelihood of each observation).
kde.score_samples(draws.T)
For instance, based on the means $[0.45, 0.5, 0.55]$, the sample $[10, -10, 0]$ should be very unlikely, while $[0.4, 0.5, 0.6]$ will be more likely. And the vector of empirical means is a very likely observation as well.
kde.score(np.array([10, -10, 0]).reshape(1, -1))
kde.score(np.array([0.4, 0.5, 0.6]).reshape(1, -1))
kde.score(empirical_means.reshape(1, -1))
Now that we have a model of Kernel Density estimation, we can use it to generate some random samples.
kde.sample?
Basically, that means we can use this model to predict what the next output of the 3 arms (constituting the Gaussian problem) will be.
Let's see this with one example.
np.random.seed(1)
one_sample = kde.sample()
one_sample
one_draw = M.draw_each()
one_draw
Of course, the next random rewards from the arms have no reason to be close to the predicted ones...
But maybe we can use the prediction to choose the arm with the highest sample? And hopefully this will be the best arm, at least on average!
best_arm_sampled = np.argmax(one_sample)
best_arm_sampled
assert best_arm_sampled == best_arm, "Error: the best arm is wrongly estimated from a random sample, even after {} observations.".format(T_0)
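One single sample can of course be misleading. As a quick (illustrative) check, we can draw many samples from the fitted KDE and count how often their argmax is the true best arm:

np.random.seed(2)                      # arbitrary seed
many_samples = kde.sample(1000)        # shape (1000, 3): 1000 joint predictions of the 3 arms
success_rate = np.mean(np.argmax(many_samples, axis=1) == best_arm)
print("The sampled argmax selects the true best arm in about {:.1%} of the cases".format(success_rate))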
We can also manually implement a simple 1D Unsupervised Learning algorithm, which fits a Gaussian distribution $\mathcal{N}(\mu,\sigma)$ on the 1D data of each arm.
Let's start with a base class, showing the signature any Unsupervised Learning algorithm should have to be used in our policy (defined below).
# --- Unsupervised fitting models
class FittingModel(object):
    """ Base class for any fitting model"""

    def __init__(self, *args, **kwargs):
        """ Nothing to do here."""
        pass

    def __repr__(self):
        return str(self)

    def fit(self, data):
        """ Nothing to do here."""
        return self

    def sample(self, shape=1):
        """ Always 0., for instance."""
        return 0.

    def score_samples(self, data):
        """ Always 1., for instance."""
        return 1.

    def score(self, data):
        """ Log likelihood of the point (or the vector of data), under the current model."""
        return np.log(np.sum(self.score_samples(data)))
And then, the SimpleGaussianKernel class, which uses scipy.stats.norm.pdf to evaluate the likelihood of an observation.
import scipy.stats as st
class SimpleGaussianKernel(FittingModel):
    """ Basic Unsupervised Learning algorithm, which simply fits a 1D Gaussian on some 1D data."""

    def __init__(self, loc=0., scale=1., *args, **kwargs):
        r""" Starts with :math:`\mathcal{N}(0, 1)`, by default."""
        self.loc = float(loc)
        self.scale = float(scale)

    def __str__(self):
        return "N({:.3g}, {:.3g})".format(self.loc, self.scale)

    def fit(self, data):
        """ Use the mean and standard deviation from the 1D vector data (of shape `n_samples` or `(n_samples, 1)`)."""
        self.loc, self.scale = np.mean(data), np.std(data)
        return self

    def sample(self, shape=1):
        """ Return one or more samples, from the current Gaussian model."""
        if shape == 1:
            return np.random.normal(self.loc, self.scale)
        else:
            return np.random.normal(self.loc, self.scale, shape)

    def score_samples(self, data):
        """ Likelihood of the point (or the vector of data), under the current Gaussian model, component-wise."""
        # Note: self.scale already stores the standard deviation (np.std), so it is passed directly.
        return st.norm.pdf(data, loc=self.loc, scale=self.scale)
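As a quick sanity check (illustration only), we can fit this simple kernel on the $T_0 = 10$ observations of the first arm, and look at the estimated parameters and a few samples:

kernel0 = SimpleGaussianKernel().fit(draws[0])   # fit on the 10 rewards observed from arm 0
print(kernel0)                                   # estimated N(mean, std), to compare with N(0.45, 0.2)
np.random.seed(3)                                # arbitrary seed
kernel0.sample(5)                                # a few predicted rewards for arm 0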
Based on that idea, we can implement a policy, following the common API of all the policies of my framework.
The UnsupervisedLearning policy works roughly as follows:
- Start with an initial exploration phase, playing each arm $T_0$ times (in a round-robin way), to gather some data about every arm.
- Then, fit one Unsupervised Learning model per arm (e.g., KernelDensity estimators) on the rewards observed from that arm.
- To choose an arm, sample a prediction $s_k(t)$ from each of the $K$ models, and play the arm with the highest sample.
- Once in a while (every fit_every steps, e.g., $100$), retrain all the Unsupervised Learning algorithms on the data gathered so far.
- A more robust (and so more correct) variant is to use a bunch of samples, and use their mean to give $s_k(t)$.
In code, this gives the following:
class UnsupervisedLearning(object):
    """ Generic policy using an Unsupervised Learning algorithm, from scikit-learn.

    - Warning: still highly experimental!
    """

    def __init__(self, nbArms, estimator=KernelDensity,
                 T_0=10, fit_every=100, meanOf=50,
                 lower=0., amplitude=1.,  # not used, but needed for my framework
                 *args, **kwargs):
        self.nbArms = nbArms
        self.t = -1
        T_0 = int(T_0)
        self.T_0 = int(max(1, T_0))
        self.fit_every = int(fit_every)
        self.meanOf = int(meanOf)
        # Unsupervised Learning algorithm
        self._was_fitted = False
        self._estimator = estimator
        self._args = args
        self._kwargs = kwargs
        self.ests = [self._estimator(*self._args, **self._kwargs) for _ in range(nbArms)]
        # Store all the observations
        self.observations = [[] for _ in range(nbArms)]

    def __str__(self):
        return "UnsupervisedLearning({.__name__}, $T_0={:.3g}$, $T_1={:.3g}$, $M={:.3g}$)".format(self._estimator, self.T_0, self.fit_every, self.meanOf)

    def startGame(self):
        """ Reinitialize everything."""
        self.t = -1
        self._was_fitted = False  # also reset the fitted flag and the stored observations
        self.ests = [self._estimator(*self._args, **self._kwargs) for _ in range(self.nbArms)]
        self.observations = [[] for _ in range(self.nbArms)]

    def getReward(self, armId, reward):
        """ Store this observation."""
        # print(" - At time {}, we saw {} from arm {} ...".format(self.t, reward, armId))  # DEBUG
        self.observations[armId].append(reward)

    def choice(self):
        """ Choose an arm."""
        self.t += 1
        # Start by sampling each arm a certain number of times
        if self.t < self.nbArms * self.T_0:
            # print("- First phase: exploring arm {} at time {} ...".format(self.t % self.nbArms, self.t))  # DEBUG
            return self.t % self.nbArms
        else:
            # print("- Second phase: at time {} ...".format(self.t))  # DEBUG
            # 1. Fit the Unsupervised Learning on *all* the data observed so far, but do it once in a while only
            if not self._was_fitted:
                # print(" - Need to first fit the model of each arm with the first {} observations, now of shape {} ...".format(self.fit_every, np.shape(self.observations)))  # DEBUG
                self.fit(self.observations)
                self._was_fitted = True
            elif self.t % self.fit_every == 0:
                # print(" - Need to refit the model of each arm with {} more observations, now of shape {} ...".format(self.fit_every, np.shape(self.observations)))  # DEBUG
                self.fit(self.observations)
            # 2. Sample a random prediction for next output of the arms
            prediction = self.sample_with_mean()
            # 3. Use this sample to select next arm to play
            best_arm_predicted = np.argmax(prediction)
            # print(" - So the best arm seems to be = {} ...".format(best_arm_predicted))  # DEBUG
            return best_arm_predicted

    # --- Shortcut methods

    def fit(self, data):
        """ Fit each of the K models, with the data accumulated up-to now."""
        for armId in range(self.nbArms):
            # print(" - Fitting the #{} model, with observations of shape {} ...".format(armId + 1, np.shape(self.observations[armId])))  # DEBUG
            est = self.ests[armId]
            est.fit(np.asarray(data[armId]).reshape(-1, 1))
            self.ests[armId] = est

    def sample(self):
        """ Return a vector of random samples, one from each of the K models."""
        return [float(est.sample()) for est in self.ests]

    def sample_with_mean(self, meanOf=None):
        """ Return a vector of random samples, one from each of the K models, by averaging a lot of samples (to reduce variance)."""
        if meanOf is None:
            meanOf = self.meanOf
        return [float(np.mean(est.sample(meanOf))) for est in self.ests]

    def score(self, obs):
        """ Return a vector of scores, for each of the K models on its observation."""
        return [float(est.score(o)) for est, o in zip(self.ests, obs)]

    def estimatedOrder(self):
        """ Return the estimated order of the arms, as a permutation on [0..K-1] that would order the arms by increasing means."""
        return np.argsort(self.sample_with_mean())
UnsupervisedLearning?
For example, we can choose these values for the numerical parameters:
nbArms = M.nbArms
T_0 = 100
fit_every = 1000
meanOf = 200
And use the same Unsupervised Learning algorithm as previously.
estimator = KernelDensity
kwargs = dict(kernel='gaussian', bandwidth=0.2)
estimator2 = SimpleGaussianKernel
kwargs2 = dict()
This gives the following policy:
policy = UnsupervisedLearning(nbArms, T_0=T_0, fit_every=fit_every, meanOf=meanOf, estimator=estimator, **kwargs)
policy?
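Before running full simulations with the Evaluator, we can sanity-check this policy with a short manual play loop (just a sketch: the number of steps and the seed are arbitrary, and rewards are drawn with M.draw_each() as above):

np.random.seed(42)          # arbitrary seed for this quick check
policy.startGame()
for t in range(2000):
    arm = policy.choice()              # ask the policy which arm to play
    reward = M.draw_each()[arm]        # draw rewards, keep only the chosen arm's
    policy.getReward(arm, reward)
# The estimated ordering (by increasing means) should put the true best arm (index 2) last:
policy.estimatedOrder()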
We can compare the performance of this UnsupervisedLearning(kde) policy, on the same Gaussian problem, against four reference strategies:
- EmpiricalMeans, which only uses the empirical mean estimators $\hat{\mu}_k(t)$. It is known to be insufficient.
- UCB, the UCB1 algorithm. It is known to be quite efficient.
- Thompson, the Thompson Sampling algorithm. It is known to be very efficient.
- klUCB, the kl-UCB algorithm, for Gaussian arms (klucb = klucbGauss). It is also known to be very efficient.

I implemented in the Environment module an Evaluator class, which makes it very convenient to run experiments of Multi-Armed Bandit games without a sweat.
Let us use it!
from SMPyBandits.Environment import Evaluator
We will start with a small experiment, with a small horizon.
HORIZON = 30000
REPETITIONS = 100
N_JOBS = min(REPETITIONS, 4)
means = [0.45, 0.5, 0.55]
ENVIRONMENTS = [ [Gaussian(mu, sigma=0.2) for mu in means] ]
from SMPyBandits.Policies import EmpiricalMeans, UCB, Thompson, klUCB
from SMPyBandits.Policies import klucb_mapping, klucbGauss as _klucbGauss
sigma = 0.2
# Custom klucb function
def klucbGauss(x, d, precision=0.):
    """klucbGauss(x, d, sig2) with the good variance (= sigma)."""
    return _klucbGauss(x, d, sigma)

klucb = klucbGauss
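For reference (a standard computation, not specific to SMPyBandits): for Gaussian arms with known variance $\sigma^2$, the divergence used by kl-UCB is $\mathrm{KL}(\mathcal{N}(x,\sigma^2), \mathcal{N}(y,\sigma^2)) = \frac{(x-y)^2}{2\sigma^2}$, so the index $\sup\{ q : N_k(t) \, \mathrm{KL}(\hat{\mu}_k(t), q) \leq \log(t) \}$ has a closed form:

$$ U_k(t) = \hat{\mu}_k(t) + \sqrt{\frac{2 \sigma^2 \log(t)}{N_k(t)}}, $$

where $N_k(t)$ is the number of pulls of arm $k$ up to time $t$. In other words, with this klucbGauss function, klUCB is essentially a UCB index whose exploration width is calibrated to the arm variance.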
POLICIES = [
    # --- Naive algorithms
    {
        "archtype": EmpiricalMeans,
        "params": {}
    },
    # --- Our algorithm, with two Unsupervised Learning algorithms
    {
        "archtype": UnsupervisedLearning,
        "params": {
            "estimator": KernelDensity,
            "kernel": 'gaussian',
            "bandwidth": sigma,
            "T_0": T_0,
            "fit_every": fit_every,
            "meanOf": meanOf,
        }
    },
    {
        "archtype": UnsupervisedLearning,
        "params": {
            "estimator": SimpleGaussianKernel,
            "T_0": T_0,
            "fit_every": fit_every,
            "meanOf": meanOf,
        }
    },
    # --- Basic UCB1 algorithm
    {
        "archtype": UCB,
        "params": {}
    },
    # --- Thompson sampling algorithm
    {
        "archtype": Thompson,
        "params": {}
    },
    # --- klUCB algorithm, with Gaussian klucb function
    {
        "archtype": klUCB,
        "params": {
            "klucb": klucb
        }
    },
]
configuration = {
    # --- Duration of the experiment
    "horizon": HORIZON,
    # --- Number of repetition of the experiment (to have an average)
    "repetitions": REPETITIONS,
    # --- Parameters for the use of joblib.Parallel
    "n_jobs": N_JOBS,    # = nb of CPU cores
    "verbosity": 6,      # Max joblib verbosity
    # --- Arms
    "environment": ENVIRONMENTS,
    # --- Algorithms
    "policies": POLICIES,
}
evaluation = Evaluator(configuration)
We asked to repeat the experiment $100$ times, so it will take a while... (about 10 minutes maximum).
from SMPyBandits.Environment import tqdm
%%time
for envId, env in tqdm(enumerate(evaluation.envs), desc="Problems"):
    # Evaluate just that env
    evaluation.startOneEnv(envId, env)
Now, we can plot some performance measures, like the regret, the best arm selection rate, the average reward etc.
def plotAll(evaluation, envId=0):
    evaluation.printFinalRanking(envId)
    evaluation.plotRegrets(envId)
    evaluation.plotRegrets(envId, semilogx=True)
    evaluation.plotRegrets(envId, meanRegret=True)
    evaluation.plotBestArmPulls(envId)
evaluation?
plotAll(evaluation)
Let us now try the same algorithms on a harder problem, with more arms ($9$ Gaussian arms, with $\sigma = 0.25$).
HORIZON = 30000
REPETITIONS = 100
N_JOBS = min(REPETITIONS, 4)
means = [0.30, 0.35, 0.40, 0.45, 0.5, 0.55, 0.60, 0.65, 0.70]
ENVIRONMENTS = [ [Gaussian(mu, sigma=0.25) for mu in means] ]
POLICIES = [
    # --- Our algorithm, with two Unsupervised Learning algorithms
    {
        "archtype": UnsupervisedLearning,
        "params": {
            "estimator": KernelDensity,
            "kernel": 'gaussian',
            "bandwidth": sigma,
            "T_0": T_0,
            "fit_every": fit_every,
            "meanOf": meanOf,
        }
    },
    {
        "archtype": UnsupervisedLearning,
        "params": {
            "estimator": SimpleGaussianKernel,
            "T_0": T_0,
            "fit_every": fit_every,
            "meanOf": meanOf,
        }
    },
    # --- Basic UCB1 algorithm
    {
        "archtype": UCB,
        "params": {}
    },
    # --- Thompson sampling algorithm
    {
        "archtype": Thompson,
        "params": {}
    },
    # --- klUCB algorithm, with Gaussian klucb function
    {
        "archtype": klUCB,
        "params": {
            "klucb": klucb
        }
    },
]
configuration = {
    # --- Duration of the experiment
    "horizon": HORIZON,
    # --- Number of repetition of the experiment (to have an average)
    "repetitions": REPETITIONS,
    # --- Parameters for the use of joblib.Parallel
    "n_jobs": N_JOBS,    # = nb of CPU cores
    "verbosity": 6,      # Max joblib verbosity
    # --- Arms
    "environment": ENVIRONMENTS,
    # --- Algorithms
    "policies": POLICIES,
}
evaluation2 = Evaluator(configuration)
We asked to repeat the experiment $100$ times, so it will take a while...
%%time
for envId, env in tqdm(enumerate(evaluation2.envs), desc="Problems"):
    # Evaluate just that env
    evaluation2.startOneEnv(envId, env)
Now, we can plot some performance measures, like the regret, the best arm selection rate, the average reward etc.
plotAll(evaluation2)
Whoo, on this last experiment, the simple UnsupervisedLearning(SimpleGaussianKernel) works as well as Thompson Sampling (Thompson)!
... In fact, this is not so surprising: Thompson Sampling samples from a Beta posterior on the mean of each arm, and UnsupervisedLearning(SimpleGaussianKernel) works very similarly (start with a flat model, fit it to the data, and to take a decision, sample it and play the arm with the highest sample). UnsupervisedLearning(SimpleGaussianKernel) basically uses a Gaussian posterior, which fits Gaussian arms better than a Beta posterior does!
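To make the analogy explicit, here is a minimal self-contained sketch (not the SMPyBandits implementation; the helper names are illustrative) of one decision step of each approach:

# Thompson Sampling step: sample a plausible mean from a Beta posterior per arm, play the argmax.
def thompson_choice(successes, failures):
    """successes[k], failures[k]: counts of (binarized) rewards 1 and 0 observed on arm k."""
    samples = [np.random.beta(1 + s, 1 + f) for s, f in zip(successes, failures)]
    return np.argmax(samples)

# Gaussian-kernel step (what UnsupervisedLearning(SimpleGaussianKernel) does, in spirit):
# sample from a Gaussian fitted on the past rewards of each arm, play the argmax.
def gaussian_kernel_choice(observations):
    """observations[k]: list of past rewards of arm k."""
    samples = [np.random.normal(np.mean(obs), np.std(obs)) for obs in observations]
    return np.argmax(samples)

Both rules draw one plausible mean per arm from a model fitted on its rewards and play the arm with the highest draw; only the family of that model (Beta versus Gaussian) differs.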
Finally, let us try the same approach on a Bernoulli bandit problem, to see how these Gaussian-kernel models behave on non-Gaussian arms.
from SMPyBandits.Arms import Bernoulli
HORIZON = 30000
REPETITIONS = 100
N_JOBS = min(REPETITIONS, 4)
means = [0.30, 0.35, 0.40, 0.45, 0.5, 0.55, 0.60, 0.65, 0.70]
ENVIRONMENTS = [ [Bernoulli(mu) for mu in means] ]
POLICIES = [
    # --- Our algorithm, with two Unsupervised Learning algorithms
    {
        "archtype": UnsupervisedLearning,
        "params": {
            "estimator": KernelDensity,
            "kernel": 'gaussian',
            "bandwidth": 0.1,
            "T_0": T_0,
            "fit_every": fit_every,
            "meanOf": meanOf,
        }
    },
    {
        "archtype": UnsupervisedLearning,
        "params": {
            "estimator": SimpleGaussianKernel,
            "T_0": T_0,
            "fit_every": fit_every,
            "meanOf": meanOf,
        }
    },
    # --- Basic UCB1 algorithm
    {
        "archtype": UCB,
        "params": {}
    },
    # --- Thompson sampling algorithm
    {
        "archtype": Thompson,
        "params": {}
    },
]
configuration = {
    # --- Duration of the experiment
    "horizon": HORIZON,
    # --- Number of repetition of the experiment (to have an average)
    "repetitions": REPETITIONS,
    # --- Parameters for the use of joblib.Parallel
    "n_jobs": N_JOBS,    # = nb of CPU cores
    "verbosity": 6,      # Max joblib verbosity
    # --- Arms
    "environment": ENVIRONMENTS,
    # --- Algorithms
    "policies": POLICIES,
}
evaluation3 = Evaluator(configuration)
We asked to repeat the experiment $100$ times, so it will take a while...
%%time
for envId, env in tqdm(enumerate(evaluation3.envs), desc="Problems"):
    # Evaluate just that env
    evaluation3.startOneEnv(envId, env)
Now, we can plot some performance measures, like the regret, the best arm selection rate, the average reward etc.
plotAll(evaluation3)
This small simulation shows that, with appropriate tweaking of the parameters, and on reasonably easy Gaussian Multi-Armed Bandit problems, one can use a generic Unsupervised Learning algorithm to solve the bandit problem, even though it is not an online algorithm (i.e., updating it at time step $t$ has a time complexity of about $\mathcal{O}(K t)$ instead of $\mathcal{O}(K)$).
By tweaking the algorithm cleverly, mainly by not refitting the models at every step (but only, e.g., once every $T_1 = 1000$ steps), it works about as well as the best algorithms we compared against, Thompson (Thompson Sampling) and klUCB (kl-UCB with the Gaussian $\mathrm{KL}(x,y)$ function).
When comparing in terms of mean rewards, accumulated rewards, best-arm selection, and regret (loss against the best fixed-arm policy), this UnsupervisedLearning(KernelDensity, ...) algorithm performs as well as the others.
But in terms of regret, it seems that the regret profile of UnsupervisedLearning(KernelDensity, ...) is not asymptotically logarithmic, contrarily to Thompson and klUCB (see the first regret curve above, at the far right).
Note also that the numerical parameters ($T_0$, $T_1 = $ fit_every, $M = $ meanOf) for the UnsupervisedLearning part, and the $\mathrm{bandwidth}$ for the KernelDensity part, have all been (manually) tweaked for this setting. For instance, $\mathrm{bandwidth} = \sigma = 0.2$ is the same value as the one used for the arms (but in a real-world scenario, this would be unknown), $T_0$ and $T_1$ are adapted to $T$, and $M$ is adapted to $\sigma$ as well (to reduce the variance of the samples from the models).

Another aspect is the time complexity of the UnsupervisedLearning(KernelDensity, ...) algorithm.
In the simulations above, we saw that it took about $42\;\mathrm{min}$ to run the $100$ repetitions of horizon $T = 30000$ (about $8.4 \cdot 10^{-4}\;\mathrm{s}$ per time step), against about $5.5\;\mathrm{min}$ for Thompson Sampling: even when fitting the unsupervised learning model only once every $T_1 = 1000$ steps, it is still about $8$ times slower than Thompson Sampling or UCB!
It is not that much, but still...
time_by_loop_UL_KD = (42 * 60.) / (REPETITIONS * HORIZON)
time_by_loop_UL_KD
time_by_loop_TS = (5.5 * 60.) / (REPETITIONS * HORIZON)
time_by_loop_TS
42 / 5.5
Similarly, the last experiment showed that this UnsupervisedLearning policy is not so efficient on Bernoulli problems, when used with a Gaussian kernel.
A better approach would be to use a Bernoulli "kernel", i.e., to fit a Bernoulli distribution on each arm.
I implemented this for my framework, see the documentation of SimpleBernoulliKernel, but I will not present it here.
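For the curious reader, here is a minimal sketch of what such a Bernoulli "kernel" could look like, following the FittingModel API defined above (this is only an illustration; the actual SimpleBernoulliKernel in SMPyBandits may differ):

class BernoulliKernelSketch(FittingModel):
    """ Illustration only: fits a Bernoulli(p) distribution on 1D data in [0, 1]."""

    def __init__(self, p=0.5, *args, **kwargs):
        self.p = float(p)

    def __str__(self):
        return "B({:.3g})".format(self.p)

    def fit(self, data):
        """ The maximum-likelihood estimate of p is the empirical mean of the observations."""
        self.p = float(np.mean(data))
        return self

    def sample(self, shape=1):
        """ Return one or more Bernoulli(p) samples."""
        if shape == 1:
            return float(np.random.binomial(1, self.p))
        else:
            return np.random.binomial(1, self.p, shape).astype(float)

    def score_samples(self, data):
        """ Likelihood of each (0/1) observation under Bernoulli(p)."""
        data = np.asarray(data)
        return np.where(data > 0.5, self.p, 1. - self.p)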
This notebook illustrates my SMPyBandits library, for which complete documentation is available at https://smpybandits.github.io/.
That's it for this demo! See you, folks!