multi gpu #1
Comments
Hi, thank you for your interest in our work. Yes, you can use multiple GPUs with this implementation by allocating shared memory space and pinning it with the unified tensor. However, we are pushing our idea to the DGL repository and some upgrades are coming soon (dmlc/dgl#3616), so you can take a look there as well!
I think huge matrix multiplication is a very basic operation that is not specific to graphs, so could we implement it as a standalone building block rather than coupling it with the graph?
Hi, yes, the latter link is about graphs (if you need that), but DGL itself supports unified tensors. Please see the documentation here: https://docs.dgl.ai/en/latest/api/python/dgl.contrib.UnifiedTensor.html. You simply need to declare the unified tensor on a shared memory space.
Some quick examples (they may need some syntax corrections). For the single-GPU case:
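(The original snippet did not survive in this copy of the thread; the following is a minimal sketch based on the UnifiedTensor documentation linked above, with placeholder tensor names and sizes.)

```python
import torch
import dgl

# Placeholder sizes; the real feature matrix would be far too large for GPU memory.
num_nodes, feat_dim = 10_000_000, 128
feats_cpu = torch.randn(num_nodes, feat_dim)

# Wrap the host tensor as a unified tensor; DGL pins it so the GPU can
# read it directly (zero-copy) instead of copying the whole matrix over.
unified_feats = dgl.contrib.UnifiedTensor(feats_cpu, device=torch.device('cuda:0'))

# Gather only the rows we need, using GPU-resident indices.
idx = torch.randint(0, num_nodes, (1024,), device='cuda:0')
batch_feats = unified_feats[idx]
```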
For the multi-GPU case:
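(Again, the original snippet is missing; this is only a sketch of the pattern described above: one process per GPU sharing a single host copy of the features via torch.multiprocessing. Names and sizes are assumptions.)

```python
import torch
import torch.multiprocessing as mp
import dgl

def worker(rank, feats_shared, num_nodes):
    torch.cuda.set_device(rank)
    # Every process wraps the same shared-memory feature tensor for its own GPU,
    # so all GPUs read from a single host copy via zero-copy access.
    unified_feats = dgl.contrib.UnifiedTensor(feats_shared,
                                              device=torch.device(f'cuda:{rank}'))
    idx = torch.randint(0, num_nodes, (1024,), device=f'cuda:{rank}')
    batch_feats = unified_feats[idx]
    print(rank, batch_feats.shape)

if __name__ == '__main__':
    num_nodes, feat_dim = 10_000_000, 128   # placeholder sizes
    feats = torch.randn(num_nodes, feat_dim)
    feats.share_memory_()                   # place the features in shared host memory
    mp.spawn(worker, args=(feats, num_nodes), nprocs=torch.cuda.device_count())
```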
Hope this helps!
Can we use this code with multiple GPUs? If so, could you give some examples in the README? Thanks!
Say there are 1 billion nodes and 60 billion edges;
the matrix will then be roughly 500 GB, while an A100 has only 80 GB of memory.
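(A back-of-envelope check of that estimate; it assumes the edges are stored as 64-bit indices in CSR format, which the issue does not specify.)

```python
# Rough storage estimate for a sparse adjacency matrix in CSR format,
# assuming 64-bit integer indices (an assumption, not stated in the issue).
num_nodes = 1_000_000_000
num_edges = 60_000_000_000
bytes_per_index = 8

col_indices = num_edges * bytes_per_index          # ~480 GB
row_pointers = (num_nodes + 1) * bytes_per_index   # ~8 GB
total_gb = (col_indices + row_pointers) / 1e9
print(f"~{total_gb:.0f} GB")  # ~488 GB, i.e. roughly the 500 GB quoted above
```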