Using backend: pytorch
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/csarch/anaconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/csarch/anaconda3/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/csarch/pytorch-direct/dgl/examples/pytorch/graphsage/train_sampling_pytorch_direct.py", line 124, in producer
    train_nfeat = train_nfeat.to(device="unified")
RuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, mlc, xla, vulkan, meta, hpu device type at start of device string: unified
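The error means the PyTorch build being used does not recognize "unified" as a device type; only the pytorch-direct custom build adds it. The sketch below is a simplified illustration (not PyTorch's actual implementation) of why the device string is rejected: the base type before the colon is checked against a fixed set, and "unified" is absent from a stock build's set.

```python
# Illustrative sketch only: stock PyTorch recognizes a fixed set of device
# type strings (the set below is taken from the error message above).
# "unified" is added only by the pytorch-direct custom build of PyTorch.
STOCK_DEVICE_TYPES = {
    "cpu", "cuda", "xpu", "mkldnn", "opengl", "opencl", "ideep",
    "hip", "msnpu", "mlc", "xla", "vulkan", "meta", "hpu",
}

def is_recognized_device(device: str) -> bool:
    """Return True if the device string's base type is in the stock set."""
    base_type = device.split(":", 1)[0]  # "cuda:0" -> "cuda"
    return base_type in STOCK_DEVICE_TYPES

print(is_recognized_device("cuda:0"))   # True
print(is_recognized_device("unified"))  # False -> triggers the RuntimeError above
```

So the fix is to run the example with the pytorch-direct build of PyTorch installed, not a stock Anaconda PyTorch.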
Hi, I saw that the UnifiedTensor can be used by passing uva for the --graph-device (for the graph structure, e.g. the CSR) and --data-device (for the node feature tensor) arguments. What is the specific operation? Could you provide an example? Thank you so much!
If you are talking about the DGL UVA optimization available since v0.8, detailed at https://github.com/dmlc/dgl/releases/tag/0.8.0, you need to refer to the DGL documentation, because that was implemented from scratch and is independent of the prototype we made available here. I personally haven't had a chance to use it.
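For what it's worth, a hypothetical invocation using the flags mentioned in the question might look like the following. The script name train_sampling.py and the flag values are assumptions based on the question above, not confirmed against the DGL example; check the DGL documentation for the authoritative usage.

```shell
# Hypothetical sketch: place the graph structure and node features in
# UVA (pinned host memory accessed by the GPU) via the flags named above.
# Script name and exact flag semantics are assumptions; consult the DGL docs.
python train_sampling.py \
    --graph-device uva \
    --data-device uva
```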