# Math Utilities

## Linear Algebra

Various linear algebra utilities.

Notes on the fast Hadamard transform (Alistair Reid, NICTA 2015):

The Hadamard transform is a recursive application of sums and differences on a vector of length n = 2^k. The log(n) levels of recursion allow computation in n log(n) operations instead of the naive n^2 per vector that would result from computing and then multiplying by an explicit Hadamard matrix.

Note: the Walsh ordering is naturally computed by the recursive algorithm. The sequence ordering requires a re-ordering afterwards, incurring a similar overhead to the original computation.
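For illustration, the recursion of sums and differences can be sketched as below. This is a simple loop-based version, not the optimised implementation benchmarked in these notes, and fwht_natural is a hypothetical name; it produces the natural ordering (i.e. what hadamard below returns with ordering=False).

```python
import numpy as np

def fwht_natural(y):
    """O(n log n) fast Hadamard transform of a length-2^k vector,
    natural ordering, normalised by 1/n (illustrative sketch only)."""
    y = np.array(y, dtype=float)  # copy so the input is not modified
    n = y.size
    h = 1
    while h < n:
        # one recursion level: sums and differences of adjacent blocks of size h
        for i in range(0, n, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y / n
```

Each pass touches all n elements and the stride doubles each time, giving the n log(n) cost described above.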

Larger tests against calling Julia (length 4*512 vectors, M = np.argsort(np.sin(np.arange(64))).astype(float)):

- Julia: 570us (sequence ordered)
- Al's single optimised: 560us (sequence ordered), 30us (natural)
- Al's basic vectorised: 1370us (sequence ordered)

revrand.mathfun.linalg.cho_log_det(L)

Compute the log of the determinant of $$A$$, given its (upper or lower) Cholesky factorization $$LL^T$$.

Parameters:
- L (ndarray) – an upper or lower Cholesky factor

Examples

>>> import numpy as np
>>> from scipy.linalg import cholesky
>>> from revrand.mathfun.linalg import cho_log_det
>>> A = np.array([[ 2, -1,  0],
...               [-1,  2, -1],
...               [ 0, -1,  2]])

>>> Lt = cholesky(A)
>>> np.isclose(cho_log_det(Lt), np.log(np.linalg.det(A)))
True

>>> L = cholesky(A, lower=True)
>>> np.isclose(cho_log_det(L), np.log(np.linalg.det(A)))
True
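The identity presumably being exploited here (an assumption, though it is the standard one for Cholesky factors) is log|A| = 2 Σ_i log L_ii, since |A| = |L| |L^T| and the determinant of a triangular matrix is the product of its diagonal. A minimal sketch with a hypothetical name:

```python
import numpy as np
from scipy.linalg import cholesky

def cho_log_det_sketch(L):
    # log|A| = log|L| + log|L^T| = 2 * sum(log(diag(L)));
    # works for either the upper or the lower Cholesky factor
    return 2.0 * np.sum(np.log(np.diag(L)))
```

Summing logs of the diagonal is both O(n) and far more overflow-resistant than forming the determinant directly.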

revrand.mathfun.linalg.hadamard(Y, ordering=True)

Very fast Hadamard transform for a single vector at a time.

Parameters:
- Y (ndarray) – the n*2^k data to be 1D Hadamard transformed
- ordering (bool, optional) – reorder from Walsh to sequence ordering

Returns:
- H (ndarray) – Hadamard transformed data

Examples

From https://en.wikipedia.org/wiki/Hadamard_transform, with normalisation:

>>> import numpy as np
>>> from revrand.mathfun.linalg import hadamard
>>> y = np.array([[1, 0, 1, 0, 0, 1, 1, 0]])
>>> hadamard(y, ordering=False)
array([[ 0.5 ,  0.25,  0.  , -0.25,  0.  ,  0.25,  0.  ,  0.25]])
>>> hadamard(y, ordering=True)
array([[ 0.5 ,  0.  ,  0.  ,  0.  , -0.25,  0.25,  0.25,  0.25]])

revrand.mathfun.linalg.solve_posdef(A, b)

Solve the system $$A X = b$$ for $$X$$ where $$A$$ is a positive semi-definite matrix.

This first tries a Cholesky factorisation and, if that is numerically unstable, falls back to a solve using a truncated SVD (see solve_posdef_svd).

The log-determinant of $$A$$ is also returned since it requires minimal overhead.

Parameters:
- A (ndarray) – a positive semi-definite matrix
- b (ndarray) – an array or matrix

Returns:
- X (ndarray) – the result of $$X = A^{-1} b$$
- logdet (float) – the log-determinant of $$A$$

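A hedged sketch of the try-Cholesky-then-fall-back-to-SVD strategy described above (solve_posdef_sketch is a hypothetical name, and the s_tol value and exact fallback details are assumptions, not the library's code):

```python
import numpy as np
import scipy.linalg as la

def solve_posdef_sketch(A, b):
    """Solve A x = b for positive (semi-)definite A, returning (x, logdet)."""
    try:
        # Fast path: Cholesky factorisation A = L L^T, then triangular solves
        L = la.cholesky(A, lower=True)      # raises LinAlgError if not numerically PD
        x = la.cho_solve((L, True), b)
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
    except la.LinAlgError:
        # Fallback: SVD with tiny singular values clamped, for semi-definite A
        U, s, Vt = np.linalg.svd(A)
        s = np.maximum(s, 1e-15)
        x = Vt.T @ ((U.T @ b) / s)
        logdet = np.sum(np.log(s))
    return x, logdet
```

The log-determinant falls out of either factorisation for free, which is why the function returns it alongside the solution.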
revrand.mathfun.linalg.svd_log_det(s)

Compute the log of the determinant of $$A$$, given its singular values from an SVD factorisation (i.e. s from U, s, Vt = svd(A)).

Parameters:
- s (ndarray) – the singular values from an SVD decomposition

Examples

>>> import numpy as np
>>> from revrand.mathfun.linalg import svd_log_det
>>> A = np.array([[ 2, -1,  0],
...               [-1,  2, -1],
...               [ 0, -1,  2]])

>>> _, s, _ = np.linalg.svd(A)
>>> np.isclose(svd_log_det(s), np.log(np.linalg.det(A)))
True
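For a positive semi-definite A the singular values coincide with the eigenvalues, so the determinant is their product and the computation presumably reduces to a sum of logs. A minimal sketch (hypothetical name):

```python
import numpy as np

def svd_log_det_sketch(s):
    # |A| = prod(s) for PSD A, so log|A| = sum(log(s));
    # summing logs avoids overflow/underflow in the raw product
    return np.sum(np.log(s))
```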

revrand.mathfun.linalg.svd_solve(U, s, V, b, s_tol=1e-15)

Solve the system $$A X = b$$ for $$X$$, where $$A$$ is a positive semi-definite matrix, using its singular value decomposition.

This truncates the SVD so that only dimensions corresponding to sufficiently large singular values are used.

Parameters:
- U (ndarray) – the U factor of U, s, V = svd(A), A being a positive semi-definite matrix
- s (ndarray) – the s factor of U, s, V = svd(A)
- V (ndarray) – the V factor of U, s, V = svd(A)
- b (ndarray) – an array or matrix
- s_tol (float) – cutoff for small singular values; singular values smaller than s_tol are clamped to s_tol

Returns:
- X (ndarray) – the result of $$X = A^{-1} b$$
- okind (ndarray) – the indices of s that are kept in the factorisation
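A sketch of the truncated-SVD solve, under the assumption that V here is the third output of np.linalg.svd (i.e. its rows are the right singular vectors, V^T in maths notation); svd_solve_sketch is a hypothetical name:

```python
import numpy as np

def svd_solve_sketch(U, s, V, b, s_tol=1e-15):
    """Solve A x = b given U, s, V = np.linalg.svd(A), truncating tiny
    singular values; returns (x, okind)."""
    okind = np.flatnonzero(s > s_tol)   # indices of sufficiently large singular values
    # x = V^T diag(1/s) U^T b, restricted to the kept dimensions
    x = V[okind].T @ ((U[:, okind].T @ b) / s[okind])
    return x, okind
```

Dropping the near-zero singular values gives the pseudo-inverse solution, which is what makes this usable for semi-definite (rank-deficient) systems.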

## Special Functions

Special Numerical Operations.

revrand.mathfun.special.logsumexp(X, axis=0)

Log-sum-exp trick for a matrix X, with summation along a specified axis.

This performs the following operation in a stable fashion,

$\log \sum^K_{k=1} \exp\{x_k\}$

Parameters:
- X (ndarray) – 2D array of shape (N, D) to apply the log-sum-exp trick to
- axis (int, optional) – axis to apply the summation along (works the same as axis in numpy.sum)

Returns:
- lseX (ndarray) – the result of the log-sum-exp trick, of shape (D,) if axis=0 or shape (N,) if axis=1

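The stability trick is to subtract the per-slice maximum before exponentiating, so exp() never overflows; a minimal sketch (logsumexp_sketch is a hypothetical name, not the library's code):

```python
import numpy as np

def logsumexp_sketch(X, axis=0):
    # log sum exp(x_k) = m + log sum exp(x_k - m), with m the max along axis;
    # the shifted exponents are all <= 0, so exp() cannot overflow
    mx = np.max(X, axis=axis, keepdims=True)
    return np.squeeze(mx + np.log(np.sum(np.exp(X - mx), axis=axis, keepdims=True)),
                      axis=axis)
```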
revrand.mathfun.special.softmax(X, axis=0)

Pass X through a softmax function in a numerically stable way using the log-sum-exp trick.

This transformation is:

$\frac{\exp\{X_k\}}{\sum^K_{j=1} \exp\{X_j\}}$

and is applied to each row/column, k, of X.

Parameters:
- X (ndarray) – 2D array of shape (N, D) to apply the softmax to
- axis (int, optional) – axis to apply the summation along (works the same as axis in numpy.sum)

Returns:
- smX (ndarray) – the result of the softmax, of shape (N, D); each row will sum to 1 if axis=1, or each column will sum to 1 if axis=0

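Because the softmax is invariant to adding a constant to every X_k, the usual stable formulation shifts by the maximum before exponentiating; a minimal sketch (softmax_sketch is a hypothetical name):

```python
import numpy as np

def softmax_sketch(X, axis=0):
    # shifting by the max leaves the ratio unchanged but prevents exp overflow
    eX = np.exp(X - np.max(X, axis=axis, keepdims=True))
    return eX / np.sum(eX, axis=axis, keepdims=True)
```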
revrand.mathfun.special.softplus(X)

Pass X through a soft-plus function, $\log(1 + \exp\{X\})$, in a numerically stable way (using the log-sum-exp trick).

The softplus transformation is:

$\log(1 + \exp\{X\})$

Parameters:
- X (ndarray) – shape (N,) or shape (N, D) array of data

Returns:
- spX (ndarray) – array of the same shape as X containing the result of softplus(X)
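A standard stable formulation (an assumption about the exact trick used; softplus_sketch is a hypothetical name):

```python
import numpy as np

def softplus_sketch(X):
    # log(1 + exp(x)) = max(x, 0) + log1p(exp(-|x|)); the exponent is
    # always <= 0, so this never overflows even for very large |x|
    return np.maximum(X, 0.0) + np.log1p(np.exp(-np.abs(X)))
```

For large positive X this returns X (plus a vanishing correction), and for large negative X it returns roughly exp(X), matching the exact softplus at both extremes.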