Standard Linear Model

The standard Bayesian linear regression model.

By using the appropriate bases, this will also yield an implementation of the “A la Carte” GP [1].

 [1] Yang, Z., Smola, A. J., Song, L., & Wilson, A. G. “A la Carte – Learning Fast Kernels”. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pp. 1098-1106, 2015.
class revrand.slm.StandardLinearModel(basis=LinearBasis(onescol=True, regularizer=Parameter(value=1.0, bounds=Positive(upper=None), shape=())), var=Parameter(value=1.0, bounds=Positive(upper=None), shape=()), tol=1e-08, maxiter=1000, nstarts=100, random_state=None)

Bayesian standard linear model.

Parameters:
    basis (Basis) – A basis object; see the basis_functions module.
    var (Parameter, optional) – Initial value of the observation variance.
    tol (float, optional) – Optimiser function tolerance convergence criterion.
    maxiter (int, optional) – Maximum number of iterations for the optimiser.
    nstarts (int, optional) – If any parameters have distributions as initial values, this determines how many random candidate starts should be evaluated before commencing optimisation at the best candidate.
    random_state (None, int or RandomState, optional) – Random seed (used mainly for the random starts).
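For intuition, the conjugate weight posterior that a standard Bayesian linear model learns has a closed form. A minimal NumPy sketch, assuming a Gaussian weight prior N(0, λI) and observation variance var; here Phi stands in for the basis-expanded inputs (i.e. the output of the basis object applied to X), and all names are illustrative rather than revrand API:

```python
import numpy as np

# Hypothetical toy data: N samples expanded into D basis features.
rng = np.random.default_rng(0)
N, D = 50, 3
Phi = rng.normal(size=(N, D))            # stands in for basis(X)
w_true = np.array([1.0, -2.0, 0.5])      # illustrative "true" weights
var = 0.1                                # observation (noise) variance
y = Phi @ w_true + rng.normal(scale=np.sqrt(var), size=N)

lambd = 1.0                              # prior (regularizer) variance

# Conjugate weight posterior: w | y ~ N(m, C) with
#   C = (Phi^T Phi / var + I / lambd)^{-1}
#   m = C Phi^T y / var
C = np.linalg.inv(Phi.T @ Phi / var + np.eye(D) / lambd)
m = C @ Phi.T @ y / var

print(m)  # posterior mean weights, close to w_true
```

The hyperparameter learning the class performs on top of this (optimising var and the basis/regularizer parameters) is what fit adds over this fixed-hyperparameter solution.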
fit(X, y)

Learn the hyperparameters of a Bayesian linear regressor.

Parameters:
    X (ndarray) – (N, d) array input dataset (N samples, d dimensions).
    y (ndarray) – (N,) array of targets (N samples).
Returns:
    self

Notes

This actually optimises the evidence lower bound (ELBO) on the log marginal likelihood, rather than the log marginal likelihood directly. In the case of a full posterior covariance matrix this bound is tight, and the exact solution will be found (modulo local minima for the hyperparameters).

This class uses the Python logging module to report learning status. To view these messages, configure logging with something like:

import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)


predict(X)

    Predict the expected target value at the query inputs X.

predict_moments(X)

    Predict the mean and variance of the target distribution at the query inputs X.
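For a standard Bayesian linear model, the predictive moments have a closed form given a weight posterior N(m, C) and observation variance var. A minimal NumPy sketch; m, C and var are illustrative stand-ins for fitted values, not revrand API calls:

```python
import numpy as np

# Assumed (illustrative) fitted posterior: w | y ~ N(m, C), noise variance var.
m = np.array([1.0, -2.0, 0.5])
C = 0.01 * np.eye(3)
var = 0.1

# Basis-expanded query points (two queries, three basis features).
Phi_star = np.array([[0.2, -0.3, 1.0],
                     [1.5,  0.0, 0.4]])

# Gaussian predictive distribution at each query point:
#   E[y*] = Phi* m
#   V[y*] = diag(Phi* C Phi*^T) + var
Ey = Phi_star @ m
Vy = np.einsum('ij,jk,ik->i', Phi_star, C, Phi_star) + var

print(Ey)  # [1.3, 1.7]
print(Vy)  # [0.1113, 0.1241]
```

The predictive variance combines posterior uncertainty in the weights (the quadratic form in C) with the irreducible observation noise var, which is why it never falls below var.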