First, be sure to be in the main folder, or to have SMPyBandits installed, and import EvaluatorMultiPlayers from the Environment package:
!pip install SMPyBandits watermark
%load_ext watermark
%watermark -v -m -p SMPyBandits -a "Lilian Besson"
# Local imports
from SMPyBandits.Environment import EvaluatorMultiPlayers, tqdm
We also need arms, for instance Bernoulli-distributed arms:
# Import arms
from SMPyBandits.Arms import Bernoulli
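As a quick sanity check, here is a tiny usage sketch (assuming, as in SMPyBandits, that an arm object exposes a mean attribute and a draw() method; adjust if the API differs):
# Hypothetical quick check of a Bernoulli arm (not part of the original notebook)
arm = Bernoulli(0.7)
print(arm.mean)    # expected reward of this arm, here 0.7
print(arm.draw())  # one random reward, 0 or 1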
And finally we need some single-player and multi-player Reinforcement Learning algorithms:
# Import algorithms
from SMPyBandits.Policies import *
from SMPyBandits.PoliciesMultiPlayers import *
# Just improving the ?? in Jupyter. Thanks to https://nbviewer.jupyter.org/gist/minrk/7715212
from __future__ import print_function
from IPython.core import page
def myprint(s):
    try:
        print(s['text/plain'])
    except (KeyError, TypeError):
        print(s)
page.page = myprint
For instance, this imported the UCBalpha algorithm:
UCBalpha?
As well as the CentralizedMultiplePlay multi-player policy:
CentralizedMultiplePlay?
We also need a collision model. The usual ones are defined in the CollisionModels package, and the only one we need is the classical one, where two or more colliding users don't receive any reward.
# Collision Models
from SMPyBandits.Environment.CollisionModels import onlyUniqUserGetsReward
onlyUniqUserGetsReward?
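To make this rule concrete, here is a small standalone sketch (a toy illustration of the same idea, not the SMPyBandits implementation; for simplicity it returns expected rewards instead of random draws): each player proposes an arm, and only players whose arm is not chosen by anyone else get its reward.
from collections import Counter

def toy_collision_rule(choices, arm_means):
    """Toy version of 'only the unique user gets the reward':
    choices[j] is the arm chosen by player j; colliding players get 0."""
    counts = Counter(choices)
    return [arm_means[arm] if counts[arm] == 1 else 0.0 for arm in choices]

# Two players collide on arm 4, a third player is alone on arm 3
print(toy_collision_rule([4, 4, 3], [0.1, 0.3, 0.5, 0.7, 0.9]))  # -> [0.0, 0.0, 0.7]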
N_JOBS = 4 is the number of cores used to parallelize the code.
HORIZON = 10000
REPETITIONS = 100
NB_PLAYERS = 2
N_JOBS = 4
collisionModel = onlyUniqUserGetsReward
We consider in this example $3$ problems, with Bernoulli arms of different means.
Note: right now, the multi-environments evaluator does not work well for multi-player policies if the scenarios have different numbers of arms, so I use the same number of arms in all the problems.
ENVIRONMENTS = [  # 1) Bernoulli arms
    {   # Scenario 1 from [Komiyama, Honda, Nakagawa, 2016, arXiv 1506.00779]
        "arm_type": Bernoulli,
        "params": [0.3, 0.4, 0.5, 0.6, 0.7]
    },
    {   # Classical scenario
        "arm_type": Bernoulli,
        "params": [0.1, 0.3, 0.5, 0.7, 0.9]
    },
    {   # Harder scenario
        "arm_type": Bernoulli,
        "params": [0.005, 0.01, 0.015, 0.84, 0.85]
    }
]
We will compare Thompson Sampling against $\mathrm{UCB}_1$, using two different centralized policies (a toy sketch of both selection rules is given after their descriptions):
CentralizedMultiplePlay is the naive use of a bandit algorithm for multi-player decision making: at every step, the internal decision making process is used to determine not $1$ but $M$ arms to sample. For UCB-like algorithms, the decision making is based on an $\arg\max$ over UCB-like indexes, usually of the form $I_j(t) = X_j(t) + B_j(t)$, where $X_j(t) = \hat{\mu}_j(t) = \sum_{\tau \leq t} r_j(\tau) / N_j(t)$ is the empirical mean of arm $j$, and $B_j(t)$ is a bias term, of the form $B_j(t) = \sqrt{\frac{\alpha \log(t)}{2 N_j(t)}}$.
CentralizedIMP is very similar, but instead of following the internal decision making for all the decisions, the system uses just the empirical means $X_j(t)$ to determine $M-1$ arms to sample, and the bias-corrected term (i.e., the internal decision making, which can for instance be sampling from a Bayesian posterior) is used for only one decision. It is a heuristic, proposed in [Komiyama, Honda, Nakagawa, 2016].
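Here is that rough standalone sketch with NumPy of the two selection rules, following the formulas above. It is only an illustration under the assumption that arm statistics (cumulated rewards and pull counts) are already available; it is not the SMPyBandits CentralizedMultiplePlay or CentralizedIMP code.
import numpy as np

def toy_ucb_indexes(rewards_sum, pulls, t, alpha=1.0):
    """I_j(t) = X_j(t) + sqrt(alpha * log(t) / (2 * N_j(t))) (toy version,
    arms assumed already pulled at least once)."""
    means = rewards_sum / np.maximum(pulls, 1)
    return means + np.sqrt(alpha * np.log(t) / (2 * np.maximum(pulls, 1)))

def toy_centralized_multiple_play(rewards_sum, pulls, t, M, alpha=1.0):
    """Pick the M arms with the largest UCB-like indexes."""
    indexes = toy_ucb_indexes(rewards_sum, pulls, t, alpha)
    return [int(j) for j in np.argsort(indexes)[-M:][::-1]]  # best index first

def toy_centralized_imp(rewards_sum, pulls, t, M, alpha=1.0):
    """IMP-style rule: M-1 arms by empirical mean, the last one by the index."""
    means = rewards_sum / np.maximum(pulls, 1)
    best_by_mean = [int(j) for j in np.argsort(means)[-(M - 1):]] if M > 1 else []
    remaining = [j for j in range(len(means)) if j not in best_by_mean]
    indexes = toy_ucb_indexes(rewards_sum, pulls, t, alpha)
    return best_by_mean + [max(remaining, key=lambda j: indexes[j])]

# Example: 5 arms, some example statistics at time t = 100, M = 2 players
rewards_sum = np.array([3.0, 8.0, 12.0, 15.0, 18.0])
pulls = np.array([10, 20, 25, 25, 20])
print(toy_centralized_multiple_play(rewards_sum, pulls, t=100, M=2))
print(toy_centralized_imp(rewards_sum, pulls, t=100, M=2))
With these toy statistics both rules select arms 4 and 3; the IMP variant only uses the index (or posterior sample) to choose the second arm.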
nbArms = len(ENVIRONMENTS[0]['params'])
assert all(len(env['params']) == nbArms for env in ENVIRONMENTS), "Error: not yet supported if different environments have different numbers of arms"
nbArms
SUCCESSIVE_PLAYERS = [
    CentralizedMultiplePlay(NB_PLAYERS, nbArms, UCBalpha, alpha=1).children,
    CentralizedIMP(NB_PLAYERS, nbArms, UCBalpha, alpha=1).children,
    CentralizedMultiplePlay(NB_PLAYERS, nbArms, Thompson).children,
    CentralizedIMP(NB_PLAYERS, nbArms, Thompson).children
]
SUCCESSIVE_PLAYERS
The mother class does all the work here, as we use centralized learning.
OnePlayer = SUCCESSIVE_PLAYERS[0][0]
OnePlayer.nbArms
OneMother = OnePlayer.mother
OneMother
OneMother.nbArms
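The design behind this can be pictured with a small toy sketch (an illustrative assumption about the mother/children pattern, not the actual SMPyBandits classes): each child policy only forwards its calls to the shared mother, which runs the centralized algorithm.
# Toy sketch of the mother/children pattern (illustrative only)
class ToyMother:
    def __init__(self, nbPlayers, nbArms):
        self.nbPlayers, self.nbArms = nbPlayers, nbArms
        self.children = [ToyChild(self, playerId) for playerId in range(nbPlayers)]

    def choice_from(self, playerId):
        # A real mother would run the centralized bandit algorithm here;
        # this toy version just spreads the players over the arms.
        return playerId % self.nbArms

class ToyChild:
    def __init__(self, mother, playerId):
        self.mother, self.playerId = mother, playerId

    def choice(self):
        # The child has no learning logic of its own: it asks the mother
        return self.mother.choice_from(self.playerId)

mother = ToyMother(nbPlayers=2, nbArms=5)
print([child.choice() for child in mother.children])  # -> [0, 1]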
Complete configuration for the problem:
configuration = {
    # --- Duration of the experiment
    "horizon": HORIZON,
    # --- Number of repetitions of the experiment (to have an average)
    "repetitions": REPETITIONS,
    # --- Parameters for the use of joblib.Parallel
    "n_jobs": N_JOBS,    # = nb of CPU cores
    "verbosity": 6,      # Max joblib verbosity
    # --- Collision model
    "collisionModel": onlyUniqUserGetsReward,
    # --- Arms
    "environment": ENVIRONMENTS,
    # --- Algorithms
    "successive_players": SUCCESSIVE_PLAYERS,
}
EvaluatorMultiPlayers objects
We will need to create several objects, as the simulation first runs each policy against each environment, and the results are then aggregated to compare them.
%%time
N_players = len(configuration["successive_players"])
# List to keep all the EvaluatorMultiPlayers objects
evs = [None] * N_players
# One list of evaluators per environment (independent inner lists, to avoid aliasing)
evaluators = [[None] * N_players for _ in range(len(configuration["environment"]))]
for playersId, players in tqdm(enumerate(configuration["successive_players"]), desc="Creating"):
    print("\n\nConsidering the list of players :\n", players)
    conf = configuration.copy()
    conf['players'] = players
    evs[playersId] = EvaluatorMultiPlayers(conf)
Now we can simulate the $3$ environments, for the successive policies. That part can take some time.
%%time
for playersId, evaluation in tqdm(enumerate(evs), desc="Policies"):
    for envId, env in tqdm(enumerate(evaluation.envs), desc="Problems"):
        # Evaluate just that env
        evaluation.startOneEnv(envId, env)
        # Storing it after simulation is done
        evaluators[envId][playersId] = evaluation
And finally, visualize the results, with the plotting methods of an EvaluatorMultiPlayers object:
def plotAll(evaluation, envId):
    evaluation.printFinalRanking(envId)
    # Rewards
    evaluation.plotRewards(envId)
    # Fairness
    #evaluation.plotFairness(envId, fairness="STD")
    # Centralized regret
    evaluation.plotRegretCentralized(envId, subTerms=True)
    #evaluation.plotRegretCentralized(envId, semilogx=True, subTerms=True)
    # Number of switches
    #evaluation.plotNbSwitchs(envId, cumulated=False)
    evaluation.plotNbSwitchs(envId, cumulated=True)
    # Frequency of selection of the best arms
    evaluation.plotBestArmPulls(envId)
    # Number of collisions - not for Centralized* policies
    #evaluation.plotNbCollisions(envId, cumulated=False)
    #evaluation.plotNbCollisions(envId, cumulated=True)
    # Frequency of collision in each arm
    #evaluation.plotFrequencyCollisions(envId, piechart=True)
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12.4, 7)
$\mu = [0.3, 0.4, 0.5, 0.6, 0.7]$ is an easy Bernoulli problem.
for playersId in tqdm(range(len(evs)), desc="Policies"):
    evaluation = evaluators[0][playersId]
    plotAll(evaluation, 0)
$\mu = [0.1, 0.3, 0.5, 0.7, 0.9]$ is an easier Bernoulli problem, with a larger gap $\Delta = 0.2$.
for playersId in tqdm(range(len(evs)), desc="Policies"):
    evaluation = evaluators[1][playersId]
    plotAll(evaluation, 1)
$\mu = [0.005, 0.01, 0.015, 0.84, 0.85]$ is a harder Bernoulli problem, as there is a huge gap between suboptimal and optimal arms.
for playersId in tqdm(range(len(evs)), desc="Policies"):
    evaluation = evaluators[2][playersId]
    plotAll(evaluation, 2)
def plotCombined(e0, eothers, envId):
    # Centralized regret
    e0.plotRegretCentralized(envId, evaluators=eothers)
    # Fairness
    e0.plotFairness(envId, fairness="STD", evaluators=eothers)
    # Number of switches
    e0.plotNbSwitchsCentralized(envId, cumulated=True, evaluators=eothers)
    # Number of collisions - not for Centralized* policies
    #e0.plotNbCollisions(envId, cumulated=True, evaluators=eothers)
N = len(configuration["environment"])
for envId, env in enumerate(configuration["environment"]):
    e0, eothers = evaluators[envId][0], evaluators[envId][1:]
    plotCombined(e0, eothers, envId)
That's it for this demo!