Policies.Posterior.Gauss module
Manipulate a posterior of Gaussian experiments, which happens to also be a Gaussian distribution if the prior is Gaussian. Easy peasy!
Warning
TODO I have to test it!
Reference: [Further optimal regret bounds for Thompson sampling, S. Agrawal and N. Goyal, in Artificial Intelligence and Statistics, pages 99–107, 2013](http://proceedings.mlr.press/v31/agrawal13a.pdf).
Policies.Posterior.Gauss.normalvariate()
normal(loc=0.0, scale=1.0, size=None)
Draw random samples from a normal (Gaussian) distribution.
The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [2], is often called the bell curve because of its characteristic shape (see the example below).
The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [2].
Parameters:
- loc (float or array_like of floats): Mean (“centre”) of the distribution.
- scale (float or array_like of floats): Standard deviation (spread or “width”) of the distribution. Must be non-negative.
- size (int or tuple of ints, optional): Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn.

Returns:
- out (ndarray or scalar): Drawn samples from the parameterized normal distribution.

See also:
- scipy.stats.norm: probability density function, distribution or cumulative density function, etc.
The probability density for the Gaussian distribution is
\[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\]
where \(\mu\) is the mean and \(\sigma\) the standard deviation. The square of the standard deviation, \(\sigma^2\), is called the variance.
The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \(\mu + \sigma\) and \(\mu - \sigma\) [2]). This implies that numpy.random.normal is more likely to return samples lying close to the mean, rather than those far away.
References:
- [1] Wikipedia, “Normal distribution”, https://en.wikipedia.org/wiki/Normal_distribution
- [2] P. R. Peebles Jr., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125.
Draw samples from the distribution:
>>> mu, sigma = 0, 0.1  # mean and standard deviation
>>> s = np.random.normal(mu, sigma, 1000)
Verify the mean and the variance:
>>> abs(mu - np.mean(s))
0.0  # may vary
>>> abs(sigma - np.std(s, ddof=1))
0.1  # may vary
Display the histogram of the samples, along with the probability density function:
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
...          np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
...          linewidth=2, color='r')
>>> plt.show()
Two-by-four array of samples from N(3, 6.25):
>>> np.random.normal(3, 2.5, size=(2, 4))
array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],  # random
       [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]])  # random
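As a quick sanity check of the density formula above, the closed form can be compared against scipy.stats.norm.pdf; a minimal sketch (the variable names here are illustrative):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.5, 0.7
x = np.linspace(mu - 3 * sigma, mu + 3 * sigma, 7)

# p(x) = exp(-(x - mu)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
p = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# Should agree with SciPy's implementation to floating-point precision
assert np.allclose(p, norm.pdf(x, loc=mu, scale=sigma))
```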
class Policies.Posterior.Gauss.Gauss(mu=0.0)[source]
Bases: Policies.Posterior.Posterior.Posterior
Manipulate a posterior of Gaussian experiments, which happens to also be a Gaussian distribution if the prior is Gaussian.
The posterior distribution is a \(\mathcal{N}(\hat{\mu}_k(t), \hat{\sigma}_k^2(t))\), where
\[\hat{\mu}_k(t) = \frac{X_k(t)}{N_k(t)}, \qquad \hat{\sigma}_k^2(t) = \frac{1}{N_k(t)}.\]
Warning
This only works for a prior with variance \(\sigma^2 = 1\)!
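To make the two formulas concrete, here is a minimal sketch of how \(\hat{\mu}_k(t)\) and \(\hat{\sigma}_k^2(t)\) follow from a pull count and a reward sum (the names X_k and N_k mirror the notation above; they are not identifiers from the module):

```python
import numpy as np

rewards = np.array([0.2, -0.1, 0.5, 0.3])  # observations from arm k

N_k = rewards.size      # N_k(t): number of pulls of arm k so far
X_k = rewards.sum()     # X_k(t): cumulative reward of arm k

mu_hat = X_k / N_k      # posterior mean:     X_k(t) / N_k(t)
sigma2_hat = 1.0 / N_k  # posterior variance: 1 / N_k(t)
print(mu_hat, sigma2_hat)  # approximately 0.225 and 0.25
```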
__init__(mu=0.0)[source]
Create a posterior, assuming the prior is \(\mathcal{N}(\mu, 1)\).
The prior is centered (\(\mu = 0\)) by default, but the parameter mu can be used to change this default.
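A hedged usage sketch, assuming the SMPyBandits package is on the import path:

```python
from Policies.Posterior.Gauss import Gauss

posterior = Gauss()      # prior is N(0, 1), the centered default
shifted = Gauss(mu=0.5)  # prior is N(0.5, 1)
```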
mu = None
Parameter \(\mu\) of the posterior.
sigma = None
Parameter \(\sigma\) of the posterior.
reset(mu=None)[source]
Reset the parameters \(\mu, \sigma\), as when creating a new Gauss posterior.
sample()[source]
Get a random sample \((x, \sigma^2)\) from the Gaussian posterior (using scipy.stats.invgamma() for the variance \(\sigma^2\) and numpy.random.normal() for the mean \(x\)).
Used only by Thompson Sampling and AdBandits so far.
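The exact hyper-parameters of that inverse-gamma draw are not documented here; the following is only an illustrative sketch of the two-step pattern the docstring describes (the shape and scale values are placeholders, not the module's actual ones):

```python
import numpy as np
from scipy.stats import invgamma

def sample_posterior_sketch(mu, n_obs, rng=None):
    """Illustrative two-step draw: a variance sigma^2 from an inverse-gamma,
    then a mean x from a normal centred at mu.
    The shape parameter below is a placeholder, not the module's."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = invgamma.rvs(a=1.0 + n_obs / 2.0)                     # variance draw
    x = rng.normal(loc=mu, scale=np.sqrt(sigma2 / max(n_obs, 1)))  # mean draw
    return x, sigma2
```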
quantile(p)[source]
Return the p-quantile of the Gauss posterior.
Note
It now works fine with Policies.BayesUCB with Gauss posteriors, even if it is MUCH SLOWER than the Bernoulli posterior (Beta).
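For a Gaussian, the p-quantile has a closed form via the inverse CDF; a minimal sketch using scipy.stats.norm.ppf (not necessarily the module's exact implementation):

```python
from scipy.stats import norm

def gauss_quantile(p, mu=0.0, sigma=1.0):
    """Return the p-quantile of N(mu, sigma^2)."""
    return norm.ppf(p, loc=mu, scale=sigma)

print(gauss_quantile(0.975))  # about 1.96 for the standard Gaussian
```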
update(obs)[source]
Add an observation \(x\), or a vector of observations, assumed to be drawn from an unknown normal distribution.
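Combining update() with the posterior formulas above, one plausible implementation keeps a running count and sum; this is a sketch under that assumption, not the module's actual code:

```python
import numpy as np

class GaussPosteriorSketch:
    """Illustrative tracker with mu = X_k / N_k and sigma = 1 / sqrt(N_k)."""

    def __init__(self, mu=0.0):
        self.mu, self.sigma = mu, 1.0
        self._sum, self._count = 0.0, 0

    def update(self, obs):
        obs = np.atleast_1d(obs)  # accept a scalar or a vector
        self._sum += obs.sum()
        self._count += obs.size
        self.mu = self._sum / self._count        # X_k(t) / N_k(t)
        self.sigma = (1.0 / self._count) ** 0.5  # sqrt(1 / N_k(t))
```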
__module__ = 'Policies.Posterior.Gauss'