Return zero gradient for zero norm function. #393

Status: Open. Wants to merge 1 commit into base: master

Conversation
@bewantbe commented May 3, 2018

Fixes issue #370

The gradient of np.linalg.norm() at the zero (origin) point is now the same as that of np.abs, namely zero, which is one of its subgradients.
For second-order gradients, the mathematically correct value is +infinity, but here when ord >= 2 the result is 0 (the same as np.abs()), and when 1 < ord < 2 it is NaN with plenty of warnings, which should be enough to prevent users from relying on it.

Also note that when x is a complex number, the gradient of the norm appears to be wrong according to your documentation: there should be a conj(x) in the expression. See also the comments in the committed code.
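To illustrate the convention this PR adopts, here is a minimal pure-Python sketch (not the actual autograd implementation) of a 2-norm gradient that returns the zero subgradient at the origin, matching the behavior of np.abs; the function names `norm2` and `grad_norm2` are hypothetical and only for illustration:

```python
import math

def norm2(x):
    # Euclidean (2-) norm of a real vector given as a list of floats.
    return math.sqrt(sum(v * v for v in x))

def grad_norm2(x):
    # Gradient of norm2 away from the origin is x / ||x||.
    n = norm2(x)
    if n == 0.0:
        # At the origin the norm is not differentiable; return the zero
        # vector, which is a valid subgradient (same convention as np.abs).
        return [0.0] * len(x)
    return [v / n for v in x]

print(grad_norm2([0.0, 0.0]))  # [0.0, 0.0] at the origin
print(grad_norm2([3.0, 4.0]))  # [0.6, 0.8] away from the origin
```

Any vector with 2-norm at most 1 is a valid subgradient of the norm at the origin; zero is the natural choice because it keeps gradient-descent iterates fixed at the minimizer.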
