Faster computation of matrix inverses and determinants with discrete x #120

evanmunro opened this issue Jul 28, 2020 · 1 comment

evanmunro commented Jul 28, 2020

@brad-ross and I are working on a paper that uses Stheno for GP-related computation.

If y = f(x) + e, f(x) ~ GP(m(x), k(x, x)), and x (known as the "running variable" in our setting) is discrete with d unique values, then we think there are ways to speed up the computation of posterior and logpdf significantly. Below is a rough writeup that describes how this would be done for inverting the matrix required to compute the estimator: it uses the matrix inversion lemma to reduce an O(n^3) operation to O(n + nd + d^3). A similar derivation using the matrix determinant lemma reduces the cost of computing the logpdf so that it depends on d instead of n.

Inversion Speed-Up
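
To make the idea concrete, here is a minimal numerical sketch of the inversion identity (all names and sizes below are illustrative stand-ins for the quantities in the writeup):

```julia
using LinearAlgebra

# Minimal numerical sketch of the proposed speed-up (names illustrative):
# n observations of a running variable that takes only d unique values.
n, d, σ² = 1_000, 5, 0.1
idx = rand(1:d, n)
N = zeros(n, d)
for i in 1:n
    N[i, idx[i]] = 1              # n×d indicator ("repetition") matrix
end
A = randn(d, d)
Ku = A * A' + I                   # covariance at the d unique inputs

# Naive O(n^3): invert the full n×n covariance σ²I + N Ku N' directly.
Σ = σ² * I(n) + N * Ku * N'
direct = inv(Σ)

# Matrix inversion lemma: only a d×d system needs solving, and N'N is
# diagonal (the multiplicity of each unique value), so the cost collapses.
inner = inv(inv(Ku) + N' * N / σ²)
woodbury = I(n) / σ² - N * inner * N' / σ²^2

@assert direct ≈ woodbury
```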

@brad-ross is willing to make an attempt at implementing this, but we also wanted to see if there are any thoughts on this approach for speeding up computation (and, again, any suggestions on code structure).


willtebbutt commented Aug 2, 2020

Ah I see. This certainly makes a lot of sense -- (usually-approximate) low-rank structure is utilised regularly throughout the GP world, but this seems like a nice special case.

So the key question here is how to implement this so that you can "know" that the structure is present. E.g., just as ColVecs "knows" that what you really meant by a matrix of data was a vector-of-vectors, where each vector is a column of the matrix, you'll want a RepeatedInputs type (name up to you of course, but this seems reasonable) that's parametrised in terms of the unique inputs and N (in your notation above).
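
Something like the following, purely as a sketch (field names and layout up to you):

```julia
# Sketch of the proposed input type: stores the d unique inputs plus an
# index for each of the n observations, instead of materialising all n inputs.
# N can always be recovered from idx when needed.
struct RepeatedInputs{T, Tu<:AbstractVector{T}} <: AbstractVector{T}
    unique_inputs::Tu      # the d unique values of x
    idx::Vector{Int}       # idx[i] says which unique value observation i takes
end

Base.size(x::RepeatedInputs) = (length(x.idx),)
Base.getindex(x::RepeatedInputs, i::Int) = x.unique_inputs[x.idx[i]]
```

So e.g. RepeatedInputs([0.0, 1.0, 2.5], [1, 1, 2, 3, 2]) behaves like the length-5 vector [0.0, 0.0, 1.0, 2.5, 1.0] while only storing the 3 unique values.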

At that point you've got structure that you can dispatch on, so you then have to decide at which level of abstraction you wish to dispatch. You've got a few options.

Option 1

Firstly, if you take a look at the logpdf and rand implementations, you'll see that they're implemented in terms of

  1. the logdet of the covariance matrix, and
  2. C.U' \ data, where C is the Cholesky of the covariance matrix (see the sketch below).
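
Schematically, something like this (a sketch of the standard multivariate-normal logpdf, not the exact Stheno code):

```julia
using LinearAlgebra

# How the two primitives above combine into a Gaussian logpdf (schematic only).
function gaussian_logpdf(m, Σ, y)
    C = cholesky(Σ)
    δ = C.U' \ (y - m)            # whitened residual: δ'δ = (y-m)'Σ⁻¹(y-m)
    return -(logdet(C) + length(y) * log(2π) + sum(abs2, δ)) / 2
end
```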

So the first option is to implement a new subtype of AbstractMatrix called LowRankMatrix or something, and dispatch at the level of cov, so that whenever you ask for the covariance of a FiniteGP / between two FiniteGPs whose inputs x are both RepeatedInputs, you get a LowRankMatrix back. You could then define cholesky(::LowRankMatrix) to return something that also knows about this structure, and implement logdet and C.U' \ data on that.
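
Purely as a sketch (every name here is hypothetical, and the whitened solve is omitted), that might look like:

```julia
using LinearAlgebra

# Sketch of option 1: a lazy representation of σ²I + N Ku N',
# where N is the n×d indicator matrix.
struct LowRankMatrix{T} <: AbstractMatrix{T}
    N::Matrix{T}     # n×d indicator matrix
    Ku::Matrix{T}    # d×d covariance at the unique inputs
    σ²::T            # observation-noise variance
end

Base.size(A::LowRankMatrix) = (size(A.N, 1), size(A.N, 1))
Base.getindex(A::LowRankMatrix, i::Int, j::Int) =
    dot(view(A.N, i, :), A.Ku, view(A.N, j, :)) + (i == j) * A.σ²

# A structured "factorisation" that only ever does d×d work.
struct LowRankChol{T}
    A::LowRankMatrix{T}
    inner::Cholesky{T,Matrix{T}}  # Cholesky of Ku⁻¹ + N'N/σ²
end

LinearAlgebra.cholesky(A::LowRankMatrix) =
    LowRankChol(A, cholesky(Symmetric(inv(A.Ku) + A.N' * A.N / A.σ²)))

# Matrix determinant lemma:
# det(σ²I + N Ku N') = det(Ku⁻¹ + N'N/σ²) · det(Ku) · (σ²)ⁿ.
LinearAlgebra.logdet(C::LowRankChol) =
    logdet(C.inner) + logdet(C.A.Ku) + size(C.A, 1) * log(C.A.σ²)

# The whitened solve (C.U' \ data) would exploit the same structure via the
# matrix inversion lemma; omitted here for brevity.
```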

Option 2

Dispatch logpdf and rand on the type of x, so that if x isa RepeatedInputs you exploit the low-rank structure to perform inference.
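
As a sketch of what that buys you (a free-standing function reusing the RepeatedInputs sketch above, with a zero-mean prior for brevity; the real thing would overload Stheno's logpdf, whose exact signature I won't guess at here):

```julia
using LinearAlgebra

# Fast logpdf for repeated inputs: only d×d linear algebra, never n×n.
function fast_logpdf(k, x::RepeatedInputs, σ²::Real, y::AbstractVector{<:Real})
    u = x.unique_inputs
    d, n = length(u), length(y)
    Ku = [k(u[i], u[j]) for i in 1:d, j in 1:d]
    counts = zeros(d)                 # diagonal of N'N: multiplicity of each value
    for j in x.idx; counts[j] += 1; end
    Nty = zeros(d)                    # N'y: per-unique-value sums of y
    for (i, j) in enumerate(x.idx); Nty[j] += y[i]; end
    inner = cholesky(Symmetric(inv(Ku) + Diagonal(counts) / σ²))
    # Matrix inversion lemma: y'Σ⁻¹y = y'y/σ² - (N'y)'(Ku⁻¹ + N'N/σ²)⁻¹(N'y)/σ⁴
    quad = dot(y, y) / σ² - dot(Nty, inner \ Nty) / σ²^2
    # Matrix determinant lemma: logdet Σ = logdet(inner) + logdet(Ku) + n·log σ²
    ld = logdet(inner) + logdet(Ku) + n * log(σ²)
    return -(ld + n * log(2π) + quad) / 2
end
```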

Option 3 (related to option 2)

Have you considered that you could think of what you're proposing as a special case of Bayesian linear regression, where the number of features equals the number of unique data points, the prior covariance over the coefficients is the covariance matrix at the unique points, and the "feature matrix" is N (or maybe transpose(N), depending on your conventions)?

So you could just utilise BayesianLinearRegressors.jl and get all of this for free? Indeed, it might make a lot of sense to implement option 2 in terms of this.
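
For concreteness, here's a quick numerical check of that equivalence (illustrative names only; it doesn't call BayesianLinearRegressors.jl itself):

```julia
using LinearAlgebra

# y = N*w + e with w ~ N(mu, Ku) has the same marginal distribution over y
# as the GP view with repeated inputs.
n, d, σ² = 50, 4, 0.2
idx = rand(1:d, n)
N = zeros(n, d)
for i in 1:n
    N[i, idx[i]] = 1
end
A = randn(d, d)
Ku = A * A' + I
mu = randn(d)

# Marginal over y under the Bayesian-linear-regression view:
m_blr = N * mu
Σ_blr = N * Ku * N' + σ² * I(n)

# Marginal over y under the GP view, built pointwise from the unique inputs:
m_gp = [mu[idx[i]] for i in 1:n]
Σ_gp = [Ku[idx[i], idx[j]] + (i == j) * σ² for i in 1:n, j in 1:n]

@assert m_blr ≈ m_gp && Σ_blr ≈ Σ_gp
```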

This is the kind of thing that would actually be quite nice to have generally, as an escape hatch for people to handle repeated inputs when they know they're there, so it would be cool if you could contribute your code back.

I think I would be most in favour of option 2 implemented in terms of option 3, as it seems to me to be the cleanest approach. While implementing the linear algebra primitives necessary to exploit this structure (option 1) seems like a good idea in principle, I suspect that there will be a lot of boilerplate in practice, and it'll wind up being quite complicated.

edit: maybe RepeatedElementsVector is a better name than RepeatedInputs, since a subtype of AbstractVector doesn't intrinsically have anything to do with being a collection of inputs to something; it should really stand on its own as a vector.

edit2: you'll also need to think about how you'll exploit this structure to make predictions. You might be better off implementing this using the AbstractGPs interface, because it makes it more straightforward to dispatch on the type of the inputs. The conditioning mechanism in Stheno actually makes this quite complicated at the minute, which is one of the reasons why I'm going to transition it over to using that interface "soon". It really depends on whether you need the probabilistic-programming bit of Stheno, or whether you're fine without it.
