This small Jupyter notebook presents an experiment in the context of Multi-Armed Bandit (MAB) problems.
I am trying to answer a simple question:
"Can we use a generic black-box Bayesian optimization algorithm, like a Gaussian process or a Bayesian random forest, instead of dedicated MAB algorithms like UCB or Thompson Sampling?"
I will use my SMPyBandits library, for which a complete documentation is available at https://smpybandits.github.io/, and the scikit-optimize package (skopt).
First, be sure to be in the main folder, or to have installed SMPyBandits, and import the MAB class from the Environment package:
import numpy as np
!pip install SMPyBandits watermark
%load_ext watermark
%watermark -v -m -p SMPyBandits -a "Lilian Besson"
from SMPyBandits.Environment import MAB
And also import the Gaussian class, to create Gaussian-distributed arms.
from SMPyBandits.Arms import Gaussian
# Just improving the ?? in Jupyter. Thanks to https://nbviewer.jupyter.org/gist/minrk/7715212
from __future__ import print_function
from IPython.core import page
def myprint(s):
    try:
        print(s['text/plain'])
    except (KeyError, TypeError):
        print(s)
page.page = myprint
Gaussian?
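Just to see what one arm looks like, here is a tiny illustrative cell (not needed for the rest of the notebook); I assume, as in SMPyBandits, that an arm object has a draw() method returning one random reward.
# Illustrative only: sample a few rewards from one Gaussian arm.
# Assumption: SMPyBandits arms expose a draw() method (one sample per call).
example_arm = Gaussian(0.5, sigma=0.2)
print(example_arm)                                       # e.g. something like N(0.5, 0.2)
print([round(example_arm.draw(), 3) for _ in range(5)])  # five random rewards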
Let's create a simple bandit problem with 3 arms, and visualize a histogram showing the distribution of rewards.
means = [0.45, 0.5, 0.55]
M = MAB(Gaussian(mu, sigma=0.2) for mu in means)
_ = M.plotHistogram(horizon=10000000)
As we can see, the rewards of the different arms are close. It won't be easy to distinguish them.
I will show directly how to use any black-box optimization algorithm, following skopt's "ask-and-tell" API.
The optimization algorithm, opt, needs two methods:

- opt.tell, used like opt.tell([armId], loss), to give an observation of a certain "loss" (loss = - reward) from arm #armId to the algorithm.
- opt.ask, used like asked = opt.ask(), to ask the algorithm which arm should be sampled next.

Let's use a simple black-box Bayesian algorithm, implemented in the scikit-optimize (skopt) package: RandomForestRegressor.
from skopt.learning import RandomForestRegressor
First, we need to create a model.
our_est = RandomForestRegressor()
our_est?
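To see why this regressor is suitable for Bayesian optimization, here is a tiny illustrative cell: skopt's wrapper can return an uncertainty estimate along with the prediction (return_std=True). The toy inputs X_toy and y_toy below are hypothetical values, just for the demo.
# Illustrative only: skopt's RandomForestRegressor also returns an uncertainty
# estimate (return_std=True), which is what makes it usable for Bayesian optimization.
X_toy = [[0], [0], [1], [1], [2], [2]]          # arm indices, as 1-D features (hypothetical)
y_toy = [0.44, 0.47, 0.52, 0.49, 0.56, 0.58]    # hypothetical observed rewards
our_est.fit(X_toy, y_toy)
mean, std = our_est.predict([[0], [1], [2]], return_std=True)
print(mean, std)   # predicted mean reward and uncertainty for each arm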
from skopt import Optimizer
def arms_optimizer(nbArms, est):
    return Optimizer(
        [
            list(range(nbArms))  # Categorical dimension: arm index!
        ],
        est(),
        acq_optimizer="sampling",
        n_random_starts=3 * nbArms  # Sure?
    )
our_opt = arms_optimizer(M.nbArms, RandomForestRegressor)
our_opt?
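Just to make the "ask-and-tell" flow concrete, here is a hand-driven toy loop on our_opt; a minimal sketch on simulated Gaussian rewards (this cell is illustrative only and is not needed for the rest of the notebook).
# Illustrative only: drive the ask-and-tell loop of `our_opt` by hand,
# on simulated Gaussian rewards with the same means as the problem M above.
rng = np.random.RandomState(1234)
true_means = [0.45, 0.5, 0.55]

for t in range(10):
    asked = our_opt.ask()                        # a point in the (categorical) input space
    arm = int(asked[0])                          # the arm index
    reward = rng.normal(true_means[arm], 0.2)    # simulate one Gaussian reward
    our_opt.tell(asked, 1. - reward)             # skopt *minimizes*, so feed a loss
    print("t = {}, asked arm {}, got reward {:.3f}".format(t, arm, reward))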
In code, this gives the following:

- the getReward(arm, reward) method gives loss = 1 - reward to the optimization process, using the opt.tell method,
- choice() simply calls opt.ask().

Note that the Bayesian optimization takes place with an input space of categorical data: instead of optimizing in $\mathbb{R}$ or $\mathbb{R}^K$ (for $K$ arms), the input space is a categorical representation of $\{1,\dots,K\}$.
class BlackBoxOpt(object):
    """Black-box Bayesian optimizer for Multi-Armed Bandit, using a skopt regression model (default: Random Forest).

    - **Warning**: still highly experimental! Very slow!
    """

    def __init__(self, nbArms,
                 opt=arms_optimizer, est=RandomForestRegressor,
                 lower=0., amplitude=1.,  # not used, but needed for my framework
                 ):
        self.nbArms = nbArms  #: Number of arms of the MAB problem.
        self.t = -1  #: Current time.
        # Black-box optimizer
        self._opt = opt  # Store it
        self._est = est  # Store it
        self.opt = opt(nbArms, est)  #: The black-box optimizer to use, initialized from the other arguments
        # Other attributes
        self.lower = lower  #: Known lower bound on the rewards.
        self.amplitude = amplitude  #: Known amplitude of the rewards.

    # --- Easy methods

    def __str__(self):
        return "BlackBoxOpt({}, {})".format(self._opt.__name__, self._est.__name__)

    def startGame(self):
        """ Reinitialize the black-box optimizer."""
        self.t = -1
        self.opt = self._opt(self.nbArms, self._est)  # The black-box optimizer to use, initialized from the other arguments

    def getReward(self, armId, reward):
        """ Store this observation `reward` for that arm `armId`.

        - In fact, :class:`skopt.Optimizer` is a *minimizer*, so `loss=1-reward` is stored, to maximize the rewards by minimizing the losses.
        """
        reward = (reward - self.lower) / self.amplitude  # project the reward to [0, 1]
        loss = 1. - reward  # flip
        return self.opt.tell([armId], loss)

    def choice(self):
        r""" Choose an arm, according to the black-box optimizer."""
        self.t += 1
        asked = self.opt.ask()
        # asked is a list with one element: the (categorical) arm index
        arm = int(np.round(asked[0]))
        return arm
BlackBoxOpt?
For example, for the problem $M$ defined above, for $K=3$ arms, this gives the following policy:
policy = BlackBoxOpt(M.nbArms)
policy?
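Before launching full simulations, we can drive this policy by hand for a few rounds; a minimal sketch, assuming (as in SMPyBandits) that M.arms gives the list of arm objects and that each arm has a draw() method.
# Illustrative only: a few rounds of the bandit loop with BlackBoxOpt.
policy.startGame()
for t in range(10):
    arm = policy.choice()              # ask the black-box optimizer for an arm
    reward = M.arms[arm].draw()        # draw one reward from that arm
    policy.getReward(arm, reward)      # feed the observation back (stored as a loss)
    print("t = {}, played arm {}, got reward {:.3f}".format(t, arm, reward))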
We can compare the performance of this BlackBoxOpt policy, using Random Forest regression, on the same Gaussian problem, against four strategies:

- EmpiricalMeans, which only uses the empirical mean estimators $\hat{\mu}_k(t)$. It is known to be insufficient.
- UCB, the UCB1 algorithm. It is known to be quite efficient.
- Thompson, the Thompson Sampling algorithm. It is known to be very efficient.
- klUCB, the kl-UCB algorithm, for Gaussian arms (klucb = klucbGauss). It is also known to be very efficient.

I implemented in the Environment module an Evaluator class, which makes it very convenient to run Multi-Armed Bandit experiments without breaking a sweat.
Let us use it!
from SMPyBandits.Environment import Evaluator
We will start with a small experiment, with a small horizon $T = 2000$ and only $20$ repetitions.
(We should do more, but it is very slow due to BlackBoxOpt...)
HORIZON = 2000
REPETITIONS = 20
N_JOBS = min(REPETITIONS, 3)
means = [0.45, 0.5, 0.55]
ENVIRONMENTS = [ [Gaussian(mu, sigma=0.2) for mu in means] ]
from SMPyBandits.Policies import EmpiricalMeans, UCB, Thompson, klUCB
from SMPyBandits.Policies import klucb_mapping, klucbGauss as _klucbGauss
sigma = 0.2
# Custom klucb function
def klucbGauss(x, d, precision=0.):
"""klucbGauss(x, d, sig2) with the good variance (= sigma)."""
return _klucbGauss(x, d, sigma)
klucb = klucbGauss
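As a quick illustrative sanity check, we can evaluate this klucb index on a few example values: the index should increase with both the empirical mean x and the exploration term d.
# Illustrative only: the kl-UCB index for Gaussian arms, on a few example values.
for x in [0.4, 0.5, 0.6]:
    for d in [0.1, 0.2]:
        print("klucb(x={}, d={}) = {:.4f}".format(x, d, klucb(x, d)))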
POLICIES = [
# --- Naive algorithms
{
"archtype": EmpiricalMeans,
"params": {}
},
# --- Our BlackBoxOpt algorithm, using a Random Forest regressor
{
"archtype": BlackBoxOpt,
"params": {}
},
# --- Basic UCB1 algorithm
{
"archtype": UCB,
"params": {}
},
# --- Thompson sampling algorithm
{
"archtype": Thompson,
"params": {}
},
# --- klUCB algorithm, with Gaussian klucb function
{
"archtype": klUCB,
"params": {
"klucb": klucb
}
},
]
configuration = {
# --- Duration of the experiment
"horizon": HORIZON,
# --- Number of repetition of the experiment (to have an average)
"repetitions": REPETITIONS,
# --- Parameters for the use of joblib.Parallel
"n_jobs": N_JOBS, # = nb of CPU cores
"verbosity": 6, # Max joblib verbosity
# --- Arms
"environment": ENVIRONMENTS,
# --- Algorithms
"policies": POLICIES,
}
evaluation = Evaluator(configuration)
We asked to repeat the experiment $20$ times, so it will take a while... (about 100 minutes maximum).
from SMPyBandits.Environment import tqdm # just a pretty loop
%%time
for envId, env in tqdm(enumerate(evaluation.envs), desc="Problems"):
# Evaluate just that env
evaluation.startOneEnv(envId, env)
Now, we can plot some performance measures, like the regret, the best arm selection rate, the average reward etc.
def plotAll(evaluation, envId=0):
    evaluation.printFinalRanking(envId)
    evaluation.plotRegrets(envId)
    evaluation.plotRegrets(envId, semilogx=True)
    evaluation.plotRegrets(envId, meanRegret=True)
    evaluation.plotBestArmPulls(envId)
evaluation?
plotAll(evaluation)
This second experiment will be similar, except we consider more arms. As they are all very close to each other, with a gap $\Delta = 0.05$, it gets much harder!
HORIZON = 2000
REPETITIONS = 20
N_JOBS = min(REPETITIONS, 4)
means = [0.30, 0.35, 0.40, 0.45, 0.5, 0.55, 0.60, 0.65, 0.70]
ENVIRONMENTS = [ [Gaussian(mu, sigma=0.25) for mu in means] ]
POLICIES = [
# --- Our BlackBoxOpt algorithm, using a Random Forest regressor
{
"archtype": BlackBoxOpt,
"params": {}
},
# --- Basic UCB1 algorithm
{
"archtype": UCB,
"params": {}
},
# --- Thompson sampling algorithm
{
"archtype": Thompson,
"params": {}
},
# --- klUCB algorithm, with Gaussian klucb function
{
"archtype": klUCB,
"params": {
"klucb": klucb
}
},
]
configuration = {
# --- Duration of the experiment
"horizon": HORIZON,
# --- Number of repetition of the experiment (to have an average)
"repetitions": REPETITIONS,
# --- Parameters for the use of joblib.Parallel
"n_jobs": N_JOBS, # = nb of CPU cores
"verbosity": 6, # Max joblib verbosity
# --- Arms
"environment": ENVIRONMENTS,
# --- Algorithms
"policies": POLICIES,
}
evaluation2 = Evaluator(configuration)
We asked to repeat the experiment $20$ times, so it will take a while...
%%time
for envId, env in tqdm(enumerate(evaluation2.envs), desc="Problems"):
# Evaluate just that env
evaluation2.startOneEnv(envId, env)
Now, we can plot some performance measures, like the regret, the best arm selection rate, the average reward etc.
plotAll(evaluation2)
Whoo, on this last experiment, the BlackBoxOpt policy works way better than the three other policies!
from SMPyBandits.Arms import Bernoulli
HORIZON = 2000
REPETITIONS = 20
N_JOBS = min(REPETITIONS, 4)
means = [0.30, 0.35, 0.40, 0.45, 0.5, 0.55, 0.60, 0.65, 0.70]
ENVIRONMENTS = [ [Bernoulli(mu) for mu in means] ]
klucbBern = klucb_mapping['Bernoulli']
POLICIES = [
# --- Our BlackBoxOpt algorithm, using a Random Forest regressor
{
"archtype": BlackBoxOpt,
"params": {}
},
# --- Basic UCB1 algorithm
{
"archtype": UCB,
"params": {}
},
# --- Thompson sampling algorithm
{
"archtype": Thompson,
"params": {}
},
# --- klUCB algorithm, with Bernoulli klucb function
# https://smpybandits.github.io/docs/Arms.kullback.html#Arms.kullback.klucbBern
{
"archtype": klUCB,
"params": {
"klucb": klucbBern
}
},
]
configuration = {
# --- Duration of the experiment
"horizon": HORIZON,
# --- Number of repetition of the experiment (to have an average)
"repetitions": REPETITIONS,
# --- Parameters for the use of joblib.Parallel
"n_jobs": N_JOBS, # = nb of CPU cores
"verbosity": 6, # Max joblib verbosity
# --- Arms
"environment": ENVIRONMENTS,
# --- Algorithms
"policies": POLICIES,
}
evaluation3 = Evaluator(configuration)
We asked to repeat the experiment $20$ times, so it will take a while...
%%time
for envId, env in tqdm(enumerate(evaluation3.envs), desc="Problems"):
# Evaluate just that env
evaluation3.startOneEnv(envId, env)
Now, we can plot some performance measures, like the regret, the best arm selection rate, the average reward etc.
plotAll(evaluation3)
We can see that BlackBoxOpt with RandomForestRegressor also performs very well on Bernoulli problems!
This small simulation shows that, on reasonably easy Gaussian Multi-Armed Bandit problems, one can use a generic black-box Bayesian optimization algorithm, wrapped in an "ask-and-tell" API to make it online.
Even without any parameter tweaking or model selection step, the BlackBoxOpt policy was quite efficient (using the default Optimizer and the RandomForestRegressor from the skopt package).
When comparing in terms of mean rewards, accumulated rewards, best-arm selection, and regret (loss against the best fixed-arm policy), this BlackBoxOpt algorithm performs as well as the others.
But in terms of regret, it seems that the profile for BlackBoxOpt is not asymptotically logarithmic, unlike Thompson and klUCB (see the first regret curve above, toward the right end).
ExtraTreesRegressor worked similarly but is slower, and GaussianProcessRegressor failed; I don't really know why, but I think it is not designed to work with categorical inputs.

Another aspect is the time complexity of the BlackBoxOpt policy.
In the simulations above, we saw that it takes much more time than online bandit algorithms like UCB, klUCB or Thompson sampling.
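To get a rough order of magnitude of this overhead, here is an illustrative timing sketch; it assumes, as above, that the policies take nbArms as their only required constructor argument and that M.arms[arm].draw() gives one reward.
# Illustrative only: rough wall-clock time of one decision round, for a fast
# index policy (UCB) versus the black-box policy (BlackBoxOpt).
import time

def time_per_round(policy, arms, n=50):
    """Rough average wall-clock time of one choice() + getReward() round."""
    policy.startGame()
    start = time.time()
    for _ in range(n):
        arm = policy.choice()
        policy.getReward(arm, arms[arm].draw())
    return (time.time() - start) / n

print("UCB:         {:.2e} s/round".format(time_per_round(UCB(M.nbArms), M.arms)))
print("BlackBoxOpt: {:.2e} s/round".format(time_per_round(BlackBoxOpt(M.nbArms), M.arms)))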
This notebook is here to illustrate my SMPyBandits library, for which a complete documentation is available at https://smpybandits.github.io/.
See the discussion in skopt GitHub issue #407.

That's it for this demo! See you, folks!