Inconsistency between torch.clamp() and numpy.clip() behavior for complex numbers #33568
I think I mentioned this to @dylanbespalko when I reviewed his PR that added clamp support; I have no idea what this is actually supposed to do here :>
I have an idea. I don't agree with the current behavior; here are three possible semantics:

```python
def clamp(complex a, complex min, complex max):
    # same as np.clip: choose the complex number with the smallest absolute value.
    return max(min(a, max), min)

def clamp_abs(complex a, real min, real max):
    # Maintain direction, but control the speed of the plane during landing.
    mag = max(min(abs(a), abs(max)), abs(min))
    return mag * exp(1j * a.angle())

def clamp_angle(complex a, real min, real max):
    # Maintain speed, but clamp the angle so the spaceship doesn't burn up on re-entry.
    phase = max(min(angle(a), abs(max)), abs(min))
    return abs(a) * exp(1j * phase)
```

This way each use case gets its own explicit semantics.
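As a concrete illustration, here is a minimal runnable Python sketch of those three semantics using the standard `cmath` module. The name `clamp_lex` and the `key=abs` ordering are my own assumptions made to turn the pseudocode into something executable; none of this is torch or numpy API:

```python
import cmath

def clamp_lex(a: complex, lo: complex, hi: complex) -> complex:
    # pick by absolute value, as the pseudocode's comment describes
    return max(min(a, hi, key=abs), lo, key=abs)

def clamp_abs(a: complex, lo: float, hi: float) -> complex:
    # clamp the magnitude, keep the direction (phase)
    mag = max(min(abs(a), hi), lo)
    return cmath.rect(mag, cmath.phase(a))

def clamp_angle(a: complex, lo: float, hi: float) -> complex:
    # clamp the phase, keep the magnitude
    phase = max(min(cmath.phase(a), hi), lo)
    return cmath.rect(abs(a), phase)

print(clamp_abs(3 + 4j, 1.0, 2.0))    # ~(1.2+1.6j): |3+4j| = 5 clipped to 2
print(clamp_angle(3 + 4j, 0.0, 0.5))  # phase ~0.927 clipped to 0.5, magnitude stays 5
```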
@rgommers I wonder if you have an opinion here
The behavior of `np.clip` for complex input looks accidental rather than designed, so that's not something to emulate exactly. I'll open an issue on the NumPy issue tracker. Reasonable behavior could be one of:

- clip the real and imaginary parts separately
- clip based on absolute value (magnitude)

It may be worth waiting on the outcome of the NumPy discussion before doing anything here, given that this doesn't look urgent to fix.
@rgommers @anjali411
It seems like numpy folks were looking to implement clipping of the real and imaginary parts separately.
Before I get started, do we agree that we should take this route, or should we continue to wait on the numpy discussion? If we want to wait on the numpy decision, can someone suggest another issue for me to pick up?
I think the NumPy discussion is done to a large enough extent; that issue went through triage review and there's consensus that clipping the real and imaginary parts separately is the most reasonable thing to do. So please go ahead, I'd say.
Hey y'all, I'd like to get some feedback and suggestions on my understanding thus far. I found the relevant file, and there are a few issues going on here:

Issue 1

```python
a = torch.tensor([3+4j])
torch.clamp(a, 1+5j, 2)
# Actual
tensor([(1.0000 + 5.0000j)], dtype=torch.complex128)
# Expected
tensor([(2.0000 + 0.0000j)], dtype=torch.complex128)
```

How to fix: I am not sure and could use some guidance.

Issue 2

```python
a = torch.tensor([3+4j])
torch.clamp(a, 1+5j, 2+6j)  # Note 6j > 4j
# Presumed result given issue 1 is fixed
tensor([(2.0000 + 6.0000j)], dtype=torch.complex128)
# Expected
tensor([(2.0000 + 4.0000j)], dtype=torch.complex128)
```

How to fix: separate the real and imag parts of the tensor and perform the clamp on each using the real and imag values of min and max. The function in numpy looks like:

```python
def cx_clip(self, a, m, M, out=None):
    # use slow-clip for complex case
    return (self.clip(a.real, m.real, M.real)
            + (1.0j * (self.clip(a.imag, m.imag, M.imag))))
```

In addition, I'm happy to say this will be my first time writing C++ code. Are there any guides I can follow that would help me understand how to test code with short feedback loops? A very basic example with explicit instructions would be great! Super excited to contribute, and thank you for helping me!
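For what it's worth, the same recipe can be written at the Python level in torch today. This is only a sketch of the proposed semantics; the helper name `clamp_complex` is hypothetical, not an existing torch function:

```python
import torch

def clamp_complex(a, min_c, max_c):
    # proposed behavior: clamp real and imaginary parts independently,
    # mirroring the numpy cx_clip above
    real = torch.clamp(a.real, min_c.real, max_c.real)
    imag = torch.clamp(a.imag, min_c.imag, max_c.imag)
    return torch.complex(real, imag)

a = torch.tensor([3 + 4j])
print(clamp_complex(a, 1 + 1j, 2 + 3j))  # tensor([2.+3.j])
```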
In #35563, I recently gave @kostekIV some advice on how to add a new op. Beyond this, my opinion for complex clamp ops:

1. `clamp` should clip the real and imaginary parts separately, matching numpy.
2. `clamp_abs` should clamp the magnitude while preserving the angle.
3. `clamp_angle` should clamp the angle while preserving the magnitude.

I have used ops like these before. @rgommers is a numpy developer, and he said they agreed to make `np.clip` operate on the real and imaginary parts separately. You should submit a PR for step 1, and another PR for steps 2 & 3, because we want to be backwards compatible with numpy.
The clamp kernel is implemented in aten/src/ATen/native/cpu/UnaryOpsKernel.cpp. In this file you will see functions that handle one element at a time as well as vectorized chunks. Hence, when you test the code, you need to choose a tensor length (like 9) that will call both functions.
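To make that concrete, a quick self-check could compare against numpy on a length-9 tensor so that both the vectorized loop and the scalar tail execute. This is only a sketch: it assumes complex clamp gets enabled with the component-wise semantics discussed above, and the values and tolerance are arbitrary:

```python
import numpy as np
import torch

a = torch.randn(9, dtype=torch.cfloat)  # 9 is not a multiple of the SIMD width,
                                        # so both code paths run
lo, hi = -0.5 - 0.5j, 0.5 + 0.5j
out = torch.clamp(a, lo, hi)  # assumes component-wise complex clamp is enabled
ref = np.clip(a.numpy().real, lo.real, hi.real) \
      + 1j * np.clip(a.numpy().imag, lo.imag, hi.imag)
assert np.allclose(out.numpy(), ref)
```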
I think we should just leave this disabled for now, as in #36373.

The current NumPy behavior (clamp the real part) doesn't make a lot of sense, but it's at least consistent with the definition of min/max: it sorts according to that definition and then applies the clamps to the low and high ends of the range. The newly suggested NumPy behavior (clamp the components separately) also doesn't make a lot of sense, and it's inconsistent with the behavior of min/max (which doesn't look at the complex component). As pointed out in numpy/numpy#15630, we can't point to a single person who wants any of this. Also, both the current and new behavior are trivially expressible in numpy by accessing the real and imaginary parts directly.

The other logical choice (absmax / absmin / absclamp) may justify a different op, or the ability to pass an evaluation function (complex->real), for performance reasons if that's what people want. So, I'd recommend just disabling all of this for now until we get actual requests.
Just my $.02, but I currently use torch::clamp on "complex" tensors of shape [samples, 2] to simulate clipping on a pair of independent ADCs, such as in quadrature sampling. In this application I would expect a float min or max to be applied to the real and imag channels independently. I don't know of an application for a complex clamp value, but I don't think a real clamp value is currently operating on the channels independently, and it's not doing whatever numpy is doing (which is also not operating independently). The clamp angle and abs make sense, but I don't know of an immediate application for them.
Let's move this discussion to #36444.
Summary: This partially addresses #33568 by disabling clamp for complex inputs until an appropriate solution can be implemented. test_complex_unsupported in test_torch.py is extended to validate this behavior.

Pull Request resolved: #36373
Differential Revision: D20984435
Pulled By: mruberry
fbshipit-source-id: 49fd2e1e3a309f6a948585023953bae7ce3734c8
Per discussion on this issue and #36444, it seems there's consensus on disabling complex clamp (clip) in PyTorch. Disabling lets us choose an implementation later, if we like, and avoids user confusion: the multiple valid complex clamp implementations suggest that users would be unable to anticipate what clamp does. Helpful behaviors, like those @n-west mentions, can also be implemented with new, more explicit functions that clarify users' intent. Maybe torch.absclamp?
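If such an op were added, a possible shape for it might clamp the magnitude while preserving the phase. Purely a sketch; `absclamp` and its signature are hypothetical, with the name just taken from the suggestion above:

```python
import torch

def absclamp(a, min_mag, max_mag):
    # hypothetical op: clamp |a| into [min_mag, max_mag], keep the angle
    mag = torch.clamp(a.abs(), min_mag, max_mag)
    return torch.polar(mag, a.angle())

a = torch.tensor([3 + 4j])    # |a| = 5
print(absclamp(a, 1.0, 2.0))  # tensor([1.2000+1.6000j])
```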
cc @ezyang @anjali411 @dylanbespalko