
Torchscript support? #21

Open
StephenHogg opened this issue Aug 2, 2021 · 6 comments
Labels
enhancement New feature or request

Comments

@StephenHogg

Hi Ivan,

Thanks so much for making this package, it's really well developed. As things stand, however, models created with it can't be converted to TorchScript. This may or may not be straightforward to solve: please find attached an example of an error I got. Notably, TorchScript gets reasonably far; it mainly trips over functions that are defined with kwargs and the like. Is there any chance you'd consider supporting this? It would be amazing to be able to use this package in a compiled environment. I hope this isn't a bother.

Regards,

Stephen
cplxtest.txt - rename to cplxtest.ipynb

@ivannz ivannz added the enhancement New feature or request label Aug 2, 2021
@ivannz
Owner

ivannz commented Aug 2, 2021

@StephenHogg Currently cplxmodule has ONNX support for compiling models in inference or training mode, and I agree that jit.script support would be useful in the long term. At the moment, however, the API of Cplx is what is blocking jit.script support.

When implementing the Cplx object, I decided to duck-type it to be syntactically compatible with torch.Tensor, so that it would be convenient to work with for a user who is familiar with torch. Hence Cplx.view (reshape), Cplx.empty (zeros, ones), Cplx.permute, Cplx.size, and Cplx.to all take variable positional arguments (the module-level functions cplx.randn and cplx.einsum use varargs as well). Making the cplx.Cplx object jit.script-compatible would therefore require significant changes to the API.

Now, it seems the torch.Tensor equivalents of these functions are allowed to take variable arguments because the Tensor object is implemented in C++. Perhaps rewriting Cplx as a torch C++ extension might circumvent the problem, but I am not sure at the moment and can't put an estimate on how long it would take, since I am not an experienced C++ programmer and am as yet unfamiliar with writing extensions for torch.
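The varargs issue above can be sketched without torch at all. In the illustrative snippet below (a hypothetical `MiniCplx`, with plain Python lists standing in for the two real-valued tensors of the structure-of-arrays design), the `view(*shape)` method mirrors the `torch.Tensor.view(2, 3)` call style that `jit.script` cannot compile on a Python class, while `view_list` shows the kind of fixed-signature alternative an API change would entail:

```python
# Illustrative sketch: MiniCplx and view_list are hypothetical names,
# not part of cplxmodule's actual API.
from typing import List


class MiniCplx:
    def __init__(self, real: List[float], imag: List[float]):
        self.real = real  # real parts, stored separately ...
        self.imag = imag  # ... from the imaginary parts

    # Tensor-style call, e.g. c.view(2, 2). Variadic signatures like this
    # are what TorchScript rejects when scripting a Python class.
    def view(self, *shape: int) -> "MiniCplx":
        return self.view_list(list(shape))

    # Script-friendly alternative: a single list argument, at the cost of
    # the familiar call style, i.e. c.view_list([2, 2]).
    def view_list(self, shape: List[int]) -> "MiniCplx":
        n = 1
        for s in shape:
            n *= s
        assert n == len(self.real), "shape must preserve the element count"
        return MiniCplx(self.real, self.imag)  # flat layout; sketch only
```

The trade-off is purely ergonomic: both signatures carry the same information, but only the second fits TorchScript's static-typing constraints.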

@pfeatherstone

torch.Tensor supports the complex dtype now. Can't we just use that?
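For reference, a minimal sketch of the native complex support being suggested (assumes a torch build with complex dtypes, i.e. torch >= 1.8):

```python
# Native complex tensors: real and imaginary parts live in one tensor.
import torch

z = torch.tensor([1 + 2j, 3 - 1j])  # dtype defaults to torch.complex64
w = torch.full_like(z, 2j)
prod = z * w                        # complex arithmetic out of the box
print(prod.real, prod.imag)         # real/imag exposed as views
```

With this dtype, `.real` and `.imag` are views into the same storage rather than two separate tensors, which is exactly the layout difference discussed below.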

@ivannz
Owner

ivannz commented Aug 2, 2021

@pfeatherstone A user of cplxmodule informed me that linear operations with native torch.complex are slower than their analogues from cplxmodule, and we agreed that this might be due to torch's array-of-structures approach, wherein the real and imaginary parts of a complex number are kept next to each other. cplx.Cplx, on the other hand, uses a structure-of-arrays design, which keeps the real and imaginary parts in separate, essentially independent real-valued tensors.

I chose this design in order to build the module atop the existing framework, and it makes sense that batched basic linear algebra operations and convolutions could be faster when the real and imaginary parts are kept as separate real-valued tensors (on CPU, SIMD instructions operate on packed floating-point reals). At the same time, torch supports complex-valued LAPACK functions (SVD, eigendecomposition, etc.), which cplxmodule doesn't, because that has never been its purpose; besides, we've got torch for that now :).
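The structure-of-arrays arithmetic can be sketched in a few lines (plain Python lists stand in for real-valued torch tensors; the function name is illustrative, not cplxmodule's API). The point is that a complex elementwise multiply decomposes into four real products and two real sums, each of which is an ordinary vectorizable real-valued operation:

```python
# Hypothetical sketch of structure-of-arrays complex multiplication:
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i.
from typing import List, Tuple


def soa_mul(ar: List[float], ai: List[float],
            br: List[float], bi: List[float]) -> Tuple[List[float], List[float]]:
    # Each comprehension below maps to one batched real-valued kernel
    # when the lists are real torch tensors.
    real = [a * c - b * d for a, b, c, d in zip(ar, ai, br, bi)]
    imag = [a * d + b * c for a, b, c, d in zip(ar, ai, br, bi)]
    return real, imag
```

In cplxmodule the same pattern lets existing real-valued linear layers and convolutions be reused directly on the two component tensors.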

I envisioned the transition to the torch.complex data type back in late 2019, when I was designing the duck-typed cplx.Cplx object, so it shouldn't be that hard. I was planning on slowly rewriting the code of cplxmodule.nn (and the relevance submodules), and even reworked cplxmodule.utils.spectrum (the torch-1.8.0 branch), but that feedback gave me second thoughts. I think it would be useful to have some benchmark results to decide whether the upgrade is worth it.
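A benchmark along those lines could start from a small timeit harness like the sketch below (stdlib only; the toy AoS/SoA candidates are stand-ins, and in a real comparison the two timed closures would run cplxmodule's Cplx ops versus native torch.complex ops on identical inputs):

```python
# Minimal microbenchmark harness sketch; sizes and candidates are toy values.
import timeit


def bench(fn, repeat: int = 3, number: int = 50) -> float:
    """Best-of-`repeat` wall time for `number` calls of `fn`, in seconds."""
    return min(timeit.repeat(fn, repeat=repeat, number=number))


n = 1_000
aos = [complex(i, -i) for i in range(n)]                      # interleaved
soa = ([float(i) for i in range(n)], [float(-i) for i in range(n)])

t_aos = bench(lambda: [z * z for z in aos])
t_soa = bench(lambda: ([r * r - i * i for r, i in zip(*soa)],
                       [2.0 * r * i for r, i in zip(*soa)]))
print(f"AoS: {t_aos:.4f}s  SoA: {t_soa:.4f}s")
```

Taking the minimum over repeats is the usual way to suppress scheduler noise; whichever layout wins here says nothing about torch kernels, which is why the real measurement would swap in the actual tensor operations.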

@pfeatherstone

Hmm, I can see your point, at least if it is in fact true that cplx.Cplx is faster than torch.Tensor when dtype == torch.complex128. But it seems a bit weird to have a type doing the same thing as something that is natively supported, like duplicating a feature that is already in a language's standard library. People do do that when the performance gains justify it, though. As long as models can be onnx-exported with dynamic input shapes, I don't care how it's done; at the end of the day, onnxruntime will give you performance gains for free anyway (yep, onnxruntime is definitely faster than torch).

@StephenHogg
Author


I can see why you say that, though the question I originally asked was about TorchScript support. For what it's worth, my preference would absolutely be to work with torch.Tensor and its derivatives: building on something that already has a large test suite makes verifying your code easier, and colouring inside the lines, as it were, lets you use current features (such as TorchScript) and any future improvements in the torch ecosystem.

In any event, I'm very grateful that you're both giving this consideration. If you want to stick to ONNX export only, I'm happy to rewrite the parts of the package I need using torch.Tensor, given that I don't have complete freedom in terms of runtimes. Otherwise, if an issue or two could be opened for the migration (which may not be that hard?), I'd be happy to help with that.

@pfeatherstone
Copy link

As an aside, I don't think pytorch complex tensors are ONNX-exportable at the moment. Even torch.view_as_complex() failed to export for me, with ONNX opset 13 and a nightly pytorch build. So until ONNX export works, I think cplx.Cplx is probably still the best option. (torch.jit works fine, though.)
