Adaptive nugget gradient computation #208

Open
edaub opened this issue Nov 16, 2021 · 0 comments


edaub commented Nov 16, 2021

In certain circumstances, the analytical derivative computed with an adaptive nugget can differ noticeably from the value found by finite differences (particularly for the derivative with respect to the covariance scale). This happens because, when the nugget is only just large enough to factorize the covariance matrix, a small change in another parameter can alter the nugget found when evaluating the log posterior for finite differencing, and that change contaminates the derivative component.

I don't think there is anything to be done about this -- the derivative is only used when finding the parameters that minimize the negative log posterior, so it doesn't affect the correctness of any predictions. However, it's probably worth documenting that the derivative computation for adaptive nuggets treats the nugget as fixed, in case anyone encounters this discrepancy.
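
For anyone wanting to see the mechanism concretely, here is a minimal NumPy sketch (not the mogp_emulator API; the `adaptive_nugget` helper and its geometric jitter schedule are illustrative assumptions). It shows why a finite-difference gradient, which re-runs the adaptive nugget search at each evaluation point, can disagree with an analytical gradient derived with the nugget held fixed:

```python
import numpy as np

def adaptive_nugget(K, start=1e-14, factor=10.0):
    # Hypothetical helper: return the smallest jitter from a geometric
    # schedule that lets the Cholesky factorization of K succeed.
    nugget = start
    while True:
        try:
            np.linalg.cholesky(K + nugget * np.eye(K.shape[0]))
            return nugget
        except np.linalg.LinAlgError:
            nugget *= factor

def neg_log_likelihood(theta, X, y):
    # Negative log marginal likelihood (up to an additive constant) of a
    # squared-exponential GP, with the nugget re-chosen adaptively for
    # the current theta.
    lengthscale, sigma2 = np.exp(theta)
    sqdist = (X[:, None] - X[None, :]) ** 2
    K = sigma2 * np.exp(-0.5 * sqdist / lengthscale ** 2)
    nugget = adaptive_nugget(K)  # depends (discontinuously) on theta
    L = np.linalg.cholesky(K + nugget * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

# Central finite differencing re-runs the adaptive search at theta +/- h,
# so the nugget may land on a different rung of the schedule at each point.
X = np.linspace(0.0, 1.0, 8)
y = np.sin(6.0 * X)
theta = np.array([np.log(0.2), np.log(1.0)])
h = 1e-6
for i in range(len(theta)):
    e = np.zeros_like(theta)
    e[i] = h
    fd = (neg_log_likelihood(theta + e, X, y)
          - neg_log_likelihood(theta - e, X, y)) / (2 * h)
    print(f"finite-difference d/dtheta[{i}] = {fd:.6f}")
```

If theta sits near a threshold where the adaptive search switches jitter values, the two finite-difference evaluations use different nuggets and the estimate departs from the fixed-nugget analytical gradient, which is the discrepancy described above.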
