To avoid dividing by very small numbers, many methods in the lens classes do something like this:

```python
th = (x ** 2 + y ** 2).sqrt() + self.s
```
where `self.s` is a softening parameter with a typical default of 0.001. This differs from lenstronomy, which instead does:

```python
th = np.maximum(th, self.s)
```
Our current approach shifts all radial coordinates by `self.s`, not just those close to the lens center, where numerical instabilities would actually arise. Relative to lenstronomy, this biases our implementation by a global offset of the radial coordinates.
I suggest that we use `torch.maximum(th, self.s)` as a safeguard against very small radii.
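A minimal sketch of the difference, using numpy in place of torch (the radius values here are purely illustrative, not from the codebase): additive softening perturbs every radius by `s`, while clamping only touches radii below `s` and leaves the rest exact.

```python
import numpy as np

s = 1e-3  # softening parameter, typical default from the issue
r = np.array([1e-6, 1e-3, 0.5, 2.0])  # illustrative radii, small to large

# Current approach: additive softening shifts *every* radius by s
th_add = np.sqrt(r**2) + s

# Proposed approach: clamp only radii smaller than s
th_max = np.maximum(np.sqrt(r**2), s)

# Far from the lens center the two differ by a constant offset s,
# which is the global bias relative to lenstronomy:
print(th_add[-1] - th_max[-1])  # ~0.001
# Near the center both approaches avoid division by tiny numbers:
print(th_add[0], th_max[0])
```

With clamping, any radius at or above `s` is returned unchanged, so deflection angles computed away from the lens center agree with lenstronomy exactly.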