More examples #3
Comments
Yes, it should be quite easy to reuse it. You'd only need to copy over the C files and change the functions to accept tensors as arguments instead of parsing them out of the Lua state. Then just use the package example and that should be it.
Awesome, thank you! I will let you know how things go.
@apaszke I am trying to get the data out of a CudaTensor. I changed the example library to the following, but it gives me a segfault:
I need similar operations for the spatial transformer network to work with CUDA (the CPU version already works). Can you share how to do this extraction? Thanks in advance.
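For reference, a minimal sketch of what such an extraction could look like with the old THC C API. This is an assumption based on the extension-ffi CUDA example, not tested code: the function name my_lib_print_cuda is a placeholder, and the extern THCState *state follows that example's boilerplate. The key point is that THCudaTensor_data returns a device pointer, so the values must be copied to host memory with cudaMemcpy before printf can read them:

```c
/* Sketch only: assumes the THC headers and an extern THCState *state,
 * as in pytorch's extension-ffi CUDA example. Untested. */
#include <stdio.h>
#include <stdlib.h>
#include <THC/THC.h>

extern THCState *state;  /* provided by the extension boilerplate */

void my_lib_print_cuda(THCudaTensor *input)
{
    /* THCudaTensor_data returns a DEVICE pointer; dereferencing it
     * on the host (e.g. printf("%f", d_data[0])) segfaults. */
    float *d_data = THCudaTensor_data(state, input);
    long n = THCudaTensor_nElement(state, input);

    /* Copy to host memory first, then print. */
    float *h_data = malloc(n * sizeof(float));
    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (long i = 0; i < n; i++)
        printf("%f\n", h_data[i]);
    free(h_data);
}
```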
I guess my question is about how to reuse CUDA code. When I attempted to do so, it tells me:
You cannot printf a CUDA pointer; it will segfault. You may want to skim the CUDA programming guide: docs.nvidia.com/cuda/cuda-c-programming-guide
Can't you just copy the code from the original repo? You shouldn't need to change any code that computes the function, only change the argument parsing. |
@apaszke Thanks for your reply. Yes, I finished porting the CPU version and it was quite intuitive, and I read the CUDA programming guide. But how can I build a .cu extension with extension-ffi? I am able to use some existing torch CUDA functions, but when I try to write my own add function, it gives me this error:
@fxia22 torch.utils.ffi doesn't appear to have any knowledge of nvcc or .cu. I think you need to build your CUDA sources separately (see the example Makefiles that come with the CUDA SDK) and then add the built object(s) to 'extra_objects' through kwargs when creating the extension. See:
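That approach might look roughly like the following build script. This is a config sketch only: the file names (my_lib_cuda_kernels.cu/.o, src/my_lib_cuda.c/.h) are placeholders, and it assumes the old torch.utils.ffi.create_extension API from the extension-ffi examples, which forwards extra kwargs such as extra_objects to the underlying cffi build:

```python
# build.py -- sketch assuming the old torch.utils.ffi API; file names are placeholders.
import os
from torch.utils.ffi import create_extension

# Step 1: compile the .cu file separately, e.g. via a Makefile or directly:
#   nvcc -c -o src/my_lib_cuda_kernels.o src/my_lib_cuda_kernels.cu -Xcompiler -fPIC
this_dir = os.path.dirname(os.path.abspath(__file__))

ffi = create_extension(
    '_ext.my_lib',
    headers=['src/my_lib_cuda.h'],
    sources=['src/my_lib_cuda.c'],
    relative_to=__file__,
    with_cuda=True,
    # Step 2: link the prebuilt CUDA object in through extra_objects.
    extra_objects=[os.path.join(this_dir, 'src/my_lib_cuda_kernels.o')],
)

if __name__ == '__main__':
    ffi.build()
```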
@mattmacy Thanks, I will give it a shot!
Hi pytorch team,
I am looking to port https://github.com/qassemoquab/stnbhwd to pytorch with ffi. Do you know if it is possible? Is the mechanism for writing extensions similar between torch and pytorch; in other words, can I reuse some of the code from that repo? Thanks.