Policies.klUCB module

The generic KL-UCB policy for one-parameter exponential distributions.
By default, it assumes Bernoulli arms.
Reference: [Garivier & Cappé - COLT, 2011](https://arxiv.org/pdf/1102.2490.pdf).
- Policies.klUCB.c = 1.0
  Default value of the parameter c, as it was in pymaBandits v1.0.
- Policies.klUCB.TOLERANCE = 0.0001
  Default value of the tolerance for computing numerical approximations of the KL-UCB indexes.
- class Policies.klUCB.klUCB(nbArms, tolerance=0.0001, klucb=klucbBern, c=1.0, lower=0.0, amplitude=1.0)
  Bases: Policies.IndexPolicy.IndexPolicy

  The generic KL-UCB policy for one-parameter exponential distributions.
  By default, it assumes Bernoulli arms.
  Reference: [Garivier & Cappé - COLT, 2011](https://arxiv.org/pdf/1102.2490.pdf).
- __init__(nbArms, tolerance=0.0001, klucb=klucbBern, c=1.0, lower=0.0, amplitude=1.0)
  New generic index policy.
  - nbArms: the number of arms,
  - lower, amplitude: lower value and known amplitude of the rewards.
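  For illustration, a minimal usage sketch, assuming SMPyBandits is installed and klUCB is importable from the Policies package; the arm means below are hypothetical:

  ```python
  # Minimal sketch: running klUCB on three hypothetical Bernoulli arms.
  import numpy as np
  from Policies import klUCB

  means = [0.1, 0.5, 0.9]            # hypothetical Bernoulli arm means
  policy = klUCB(nbArms=len(means))
  policy.startGame()                 # reset pull counts and rewards

  rng = np.random.default_rng(42)
  for t in range(1000):
      arm = policy.choice()                      # arm with the highest KL-UCB index
      reward = float(rng.random() < means[arm])  # draw a Bernoulli reward
      policy.getReward(arm, reward)              # update the policy's statistics
  ```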
- c = None
  Parameter c used in the index (see computeIndex).
- klucb = None
  KL function to use.
- klucb_vect = None
  KL function to use, vectorized with numpy.vectorize().
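  As an illustration of how such a vectorized solver can be built, a sketch assuming klucbBern is importable from Policies.kullback (as in SMPyBandits); the means and levels below are made up:

  ```python
  # Sketch: vectorizing a scalar KL-UCB solver with numpy.vectorize().
  import numpy as np
  from Policies.kullback import klucbBern  # scalar Bernoulli KL-UCB solver

  klucb_vect = np.vectorize(klucbBern)

  means = np.array([0.2, 0.5, 0.8])    # empirical means of three arms
  levels = np.full(3, 0.5)             # exploration levels c*log(t)/N_k(t)
  print(klucb_vect(means, levels))     # one upper-confidence bound per arm
  ```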
- tolerance = None
  Numerical tolerance.
- computeIndex(arm)
  Compute the current index, at time t and after \(N_k(t)\) pulls of arm k:

  \[\begin{split}
  \hat{\mu}_k(t) &= \frac{X_k(t)}{N_k(t)}, \\
  U_k(t) &= \sup\limits_{q \in [a, b]} \left\{ q : \mathrm{kl}(\hat{\mu}_k(t), q) \leq \frac{c \log(t)}{N_k(t)} \right\}, \\
  I_k(t) &= U_k(t),
  \end{split}\]

  where rewards lie in \([a, b]\) (default \([0, 1]\)), \(\mathrm{kl}(x, y)\) is the Kullback-Leibler divergence between two distributions of means x and y (see Arms.kullback), and c is the parameter (default 1).
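  To make the formula concrete, a self-contained sketch of this computation for Bernoulli arms, using bisection over q; it mirrors the definition above rather than the library's exact implementation, and kl_bern and klucb_index are hypothetical helper names:

  ```python
  # Sketch: computing the KL-UCB index U_k(t) for a Bernoulli arm by bisection.
  import math

  def kl_bern(x, y, eps=1e-15):
      """Kullback-Leibler divergence kl(x, y) between Bernoulli(x) and Bernoulli(y)."""
      x = min(max(x, eps), 1 - eps)
      y = min(max(y, eps), 1 - eps)
      return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

  def klucb_index(mean_k, pulls_k, t, c=1.0, tolerance=1e-4):
      """U_k(t) = sup { q in [mean_k, 1] : kl(mean_k, q) <= c*log(t)/N_k(t) },
      found by bisection since kl(mean_k, .) is increasing in q on [mean_k, 1]."""
      level = c * math.log(t) / pulls_k
      low, high = mean_k, 1.0
      while high - low > tolerance:
          mid = (low + high) / 2.0
          if kl_bern(mean_k, mid) <= level:
              low = mid    # mid still satisfies the constraint, move up
          else:
              high = mid   # mid violates the constraint, move down
      return low

  print(klucb_index(mean_k=0.5, pulls_k=10, t=100))  # ~0.888 for these values
  ```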
- __module__ = 'Policies.klUCB'