
Why is it reported that "unified" cannot be recognized? #2

Open · SC1114 opened this issue Mar 29, 2023 · 3 comments
SC1114 commented Mar 29, 2023

```
Using backend: pytorch
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/csarch/anaconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/csarch/anaconda3/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/csarch/pytorch-direct/dgl/examples/pytorch/graphsage/train_sampling_pytorch_direct.py", line 124, in producer
    train_nfeat = train_nfeat.to(device="unified")
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, mlc, xla, vulkan, meta, hpu device type at start of device string: unified
```

K-Wu (Owner) commented Mar 29, 2023

Hello,

Could you please check whether you have installed the modified PyTorch we provide as a submodule at https://github.com/K-Wu/pytorch-direct/tree/ec7bdb5389ed8c9724bf257267709e43bbb4325c? If you are not sure, please tell us what you see when you import PyTorch and print its version number in an interactive shell.

Thank you.
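
For example, a minimal check along these lines (the probe mirrors the failing call from the traceback; only the modified build is expected to accept the "unified" device string, though I can't state what version string the custom build reports):

```python
import torch

# The version string hints at which build is active; a plain release
# such as "1.8.0" suggests stock PyTorch rather than the modified
# pytorch-direct build.
print(torch.__version__)

# Direct probe: stock PyTorch raises the RuntimeError shown in the
# traceback above, while the modified build should accept "unified".
try:
    torch.empty(1).to(device="unified")
    print("unified device supported")
except RuntimeError as err:
    print("unified device NOT supported:", err)
```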

SC1114 (Author) commented Apr 3, 2023 via email

K-Wu (Owner) commented Apr 4, 2023

If you are referring to the DGL UVA optimization introduced in v0.8 (detailed at https://github.com/dmlc/dgl/releases/tag/0.8.0), please refer to the DGL documentation: that feature was implemented from scratch and is independent of the prototype we made available here. I personally haven't had a chance to use it.
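
For reference, a minimal sketch of what that built-in UVA path looks like, assuming DGL >= 0.8 as described in the release notes linked above; argument names such as `use_uva` and `prefetch_node_feats` should be verified against the DGL documentation:

```python
import torch
import dgl

# Toy graph standing in for a large graph whose features exceed GPU memory.
g = dgl.rand_graph(10_000, 100_000)
g.ndata["feat"] = torch.randn(g.num_nodes(), 16)
train_nids = torch.arange(1_000)
device = torch.device("cuda:0")

# prefetch_node_feats makes the sampled blocks carry "feat" in srcdata.
sampler = dgl.dataloading.NeighborSampler([10, 10], prefetch_node_feats=["feat"])
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler,
    device=device,   # sampled blocks are produced on the GPU
    use_uva=True,    # the GPU reads the CPU-resident graph/features over UVA
    batch_size=1024,
    shuffle=True,
)

for input_nodes, output_nodes, blocks in dataloader:
    batch_feat = blocks[0].srcdata["feat"]  # gathered via UVA
    # ... run the usual forward/backward here ...
```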
