Support for MPS devices (i.e. the GPU on Mac) when using PyTorch backend #42
I have looked into this a bit more deeply, and it is slightly more complex than expected. I have managed to get it working somewhat, but there are some complications:
Given these two small changes, the forward pass of the model seems to work, and indeed gives a big speedup over the CPU. However, there is an additional problem: the evaluation of the likelihood in the neural processes package requires the use of
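For context, a likely culprit (an assumption consistent with the single-precision discussion later in this thread) is that the MPS backend supports float32 but not float64, so any step that insists on double precision fails there. A minimal sketch of the constraint:

```python
import torch

# Pick MPS when available; this falls back to CPU on other machines.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# MPS handles single precision, so float32 tensors move over fine...
x = torch.randn(4, 4, dtype=torch.float32).to(device)

# ...but float64 is not supported on MPS, so code that requires double
# precision (e.g. a likelihood evaluated in float64) errors there:
#   torch.randn(4, 4, dtype=torch.float64, device="mps")  # fails on MPS
```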
Hi @magnusross, thank you for opening this issue. It would be great if DeepSensor users could capitalise on MPS devices. I can't handle this myself because I don't have a Mac, so I appreciate your efforts to get this working. As you've realised, the backend agnosticism of DeepSensor is enabled by @wesselb's
Regarding the issue of whether we can evaluate the log-likelihood
Hey @magnusross and @tom-andersson! Apologies for the delay on my part. I've just come back from a holiday and am still catching up on email.
@magnusross, I will look at your PR for
@tom-andersson, to answer your questions:
Thanks @wesselb, perhaps it will indeed just work once to allow
Once the two changes are made in
Yes, I should be able to merge @magnusross's PR and allow single precision in the relatively short term. :) Will keep you updated!
Thanks for your help both! @wesselb, if you need me to help in any way re: the PR on
@tom-andersson, I've added a keyword argument
@magnusross, I've left some comments on your PR. I think that's basically ready to go, pending a unit test. :)
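The exact keyword is elided above, but the general idea of a constructor argument that selects single precision can be sketched like this (the class and parameter names here are purely illustrative, not the actual API):

```python
import torch

class TinyModel(torch.nn.Module):
    # `dtype` is a hypothetical keyword mirroring the one described above;
    # it lets the model be built directly in single precision for MPS.
    def __init__(self, dtype: torch.dtype = torch.float32):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2, dtype=dtype)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

model = TinyModel(dtype=torch.float32)
```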
Excellent, thanks @wesselb! @magnusross, if this gets MPS support working on your side then we can bump the
Hey both, I've been looking at this briefly this morning and have unfortunately run into another problem. I thought before that I was running the full forward pass, but I was actually just running the encoding. Unfortunately, when I run the full forward pass I now get:
I'm not sure exactly what's causing this, but I'll try to look into it later this week or next week; I'm a bit busy atm, sorry this is taking some time. I've made a PR (#49), so I'll add things to that if I find what's causing it. If either of you have any ideas, they'd be very welcome!
Hey @magnusross, I'm afraid I've never come across this error before... Don't worry if this takes you a while to dig into though.
@magnusross, @tom-andersson, my suspicion is that it might take some time before all advanced convolution operations are supported by MPS. Namely, convolutions are implemented with highly optimised GPU kernels, and these will need to be ported to MPS. You might be able to run a forward pass with a simpler convolutional architecture.
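One way to check this suspicion empirically is to probe individual ops: attempt a forward pass of the suspect module on MPS and catch the failure. This is an illustrative sketch (the helper name is made up); note also that PyTorch's `PYTORCH_ENABLE_MPS_FALLBACK=1` environment variable can route unsupported ops back to the CPU.

```python
import torch
import torch.nn as nn

def runs_on_mps(module: nn.Module, example: torch.Tensor) -> bool:
    """Return True if a forward pass of `module` succeeds on MPS (sketch)."""
    if not torch.backends.mps.is_available():
        return False
    try:
        module.to("mps")(example.to("mps"))
        return True
    except (RuntimeError, NotImplementedError):
        return False

# A plain 2-D convolution: the kind of op whose MPS kernel coverage
# determines whether the full forward pass works.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 16, 16)
supported = runs_on_mps(conv, x)
```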
Currently it seems that only CUDA devices are supported as GPUs when using the PyTorch backend. It would be nice to also be able to use the MPS device that PyTorch now supports for acceleration on Mac.
I guess a good use case for this is demoing the package on a laptop, which would avoid the need to connect to a cluster just to try the models out, while still providing enough compute to run more interesting models than simple toy examples in a reasonable time.
I think it should be a reasonably straightforward change, but I am quite inexperienced with the backend stuff, so maybe it's quite complex!
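A device-selection helper along the requested lines might look like the following sketch (the function name is illustrative, not DeepSensor's actual API):

```python
import torch

def select_device() -> torch.device:
    """Prefer CUDA, then MPS (Apple GPU), then CPU — an illustrative sketch."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # The MPS backend ships with PyTorch >= 1.12 on macOS; on other
    # platforms torch.backends.mps.is_available() simply returns False.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = select_device()
```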