Hi. Thanks for keeping everything updated here. I noticed some differences between the SNGP implementation here and what is described in the paper, which leave me with a few questions:
In conjunction with the point above, there seems to be an extra multiplication in the predictive variance that is not included in the paper's equations (https://github.com/google/edward2/blob/main/edward2/tensorflow/layers/random_feature.py#L456): the covariance is multiplied by the ridge penalty again after inversion. Was this used in the original experiments? How is it justified?
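To make the question concrete, here is a minimal NumPy sketch of the two variants being contrasted: the plain Laplace predictive variance var_i = phi_i^T Sigma phi_i, versus the version where the inverted precision matrix is additionally scaled by the ridge penalty. This is an illustrative reconstruction, not the edward2 code itself; the function and variable names are my own.

```python
import numpy as np

def predictive_variance(phi, precision, ridge_penalty, rescale=True):
    """Per-example GP predictive variance from a random-feature precision matrix.

    phi: (N, D) random-feature matrix; precision: (D, D) posterior precision.
    rescale=True mirrors the extra multiplication in question
    (covariance = ridge_penalty * inv(precision)); rescale=False is the
    plain Laplace expression var_i = phi_i^T inv(precision) phi_i.
    """
    cov = np.linalg.inv(precision)
    if rescale:
        cov = ridge_penalty * cov  # the extra multiplication in question
    # Batched quadratic form phi_i^T cov phi_i for every row i.
    return np.einsum('nd,de,ne->n', phi, cov, phi)

rng = np.random.default_rng(0)
phi = rng.normal(size=(5, 8))
s = 1.0  # ridge penalty; note the two variants coincide exactly when s = 1
precision = s * np.eye(8) + phi.T @ phi  # Gaussian-likelihood precision update
var_paper = predictive_variance(phi, precision, s, rescale=False)
var_code = predictive_variance(phi, precision, s, rescale=True)
```

With a ridge penalty of 1 (the code's default, per the question below) the two variants are identical, which may be why the difference is easy to miss; with any other ridge penalty they differ by exactly that factor.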
Regarding the comment in SNGP | Laplace RF Precision update inconsistent with the likelihood #258: it says the current code uses the Gaussian likelihood for simplicity, but the paper appears to follow a one-vs-all logistic regression. Was a one-vs-all logistic regression used in the original training, or was it always a softmax even though the likelihood is logistic, or something else entirely?
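For reference, the two precision updates being contrasted can be sketched as follows. Under a Gaussian likelihood every example contributes its feature outer product with weight 1; under a one-vs-all logistic (Bernoulli) Laplace approximation each outer product is weighted by the curvature p_i(1 - p_i) at the fitted probabilities. This is a hedged sketch of the math in the discussion, not the library's implementation.

```python
import numpy as np

def precision_update_gaussian(phi, ridge_penalty):
    """Gaussian likelihood: Sigma^{-1} = s*I + sum_i phi_i phi_i^T."""
    d = phi.shape[1]
    return ridge_penalty * np.eye(d) + phi.T @ phi

def precision_update_logistic(phi, probs, ridge_penalty):
    """One-vs-all logistic Laplace: Sigma^{-1} = s*I + sum_i p_i(1-p_i) phi_i phi_i^T.

    probs: (N,) fitted Bernoulli probabilities for this class.
    """
    d = phi.shape[1]
    w = probs * (1.0 - probs)  # per-example curvature weights
    return ridge_penalty * np.eye(d) + (phi * w[:, None]).T @ phi
```

Note the Gaussian update is data-label-independent (no probabilities enter), which is what allows the simplification mentioned in #258.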
Sorry for bothering. I noticed that in the SNGP paper there are K precision matrices of size (B, B), but in the code there is only one. Does this correspond to your third question? I'm new to uncertainty research, and this confused me about how to use the covariance matrix.
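The shape difference can be sketched explicitly. In the paper's logistic Laplace update, the p_k(1 - p_k) weights differ per class, so each of the K outputs gets its own precision matrix; under a Gaussian likelihood the weight is 1 for every class, so all K matrices collapse to a single shared one, which matches the single matrix in the code. A minimal illustration (names are mine, not the library's):

```python
import numpy as np

def per_class_precisions(phi, probs, ridge_penalty):
    """K per-class matrices, as in the paper's logistic Laplace update:
    Sigma_k^{-1} = s*I + sum_i p_ik (1 - p_ik) phi_i phi_i^T.

    phi: (N, D) features; probs: (N, K) per-class probabilities.
    Returns an array of shape (K, D, D).
    """
    n, d = phi.shape
    k = probs.shape[1]
    precisions = np.empty((k, d, d))
    for j in range(k):
        w = probs[:, j] * (1.0 - probs[:, j])
        precisions[j] = ridge_penalty * np.eye(d) + (phi * w[:, None]).T @ phi
    return precisions

def shared_precision(phi, ridge_penalty):
    """Single class-independent matrix, as with a Gaussian likelihood
    (every per-example weight is 1): Sigma^{-1} = s*I + phi^T phi."""
    d = phi.shape[1]
    return ridge_penalty * np.eye(d) + phi.T @ phi
```

Storing one (D, D) matrix instead of K of them is a K-fold memory saving, which is presumably part of the motivation for the Gaussian simplification.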
The ridge penalty defaults to 1 instead of 1e-3 as mentioned in Table 5 of the paper. Is this a result of using the Gaussian likelihood instead of the logistic?