Currently, calling `predict` on a Gaussian process in GPJax computes the entire covariance matrix at the test points. With many test points this is inefficient for downstream tasks that only require the predictive mean and variance, and hence only the diagonal elements of the covariance matrix. It would be good to find a solution that avoids computing the unnecessary off-diagonal entries. For instance, some form of lazy evaluation could defer computing entries of the covariance matrix until they are actually needed. Alternatively, there may be an efficient way of doing this using Cola.
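As a rough illustration of the idea (not GPJax's actual API), the diagonal can be computed pointwise with `jax.vmap`, evaluating only k(x_i, x_i) and never materialising the N x N Gram matrix. The `rbf` kernel below is a hypothetical stand-in for any GPJax kernel:

```python
import jax
import jax.numpy as jnp


def rbf(x, y, lengthscale=1.0):
    # Squared-exponential kernel evaluated on a single pair of points.
    return jnp.exp(-0.5 * jnp.sum((x - y) ** 2) / lengthscale**2)


def full_gram(kernel, xs):
    # O(N^2) memory: materialises every pairwise entry.
    return jax.vmap(lambda x: jax.vmap(lambda y: kernel(x, y))(xs))(xs)


def diag_only(kernel, xs):
    # O(N) memory: evaluates k(x_i, x_i) for each test point directly,
    # skipping all off-diagonal entries.
    return jax.vmap(lambda x: kernel(x, x))(xs)


xs = jnp.linspace(-1.0, 1.0, 5).reshape(-1, 1)
# The pointwise diagonal agrees with the diagonal of the full matrix.
assert jnp.allclose(diag_only(rbf, xs), jnp.diag(full_gram(rbf, xs)))
```

The same pattern would need to be threaded through the predictive covariance (e.g. the K_ss - K_s* K^-1 K_*s term), where the solve against the training Gram matrix still runs over all test points but the final reduction is done row-by-row instead of forming the full matrix product.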