Python's `and` is lazy: if the first operand is false, it doesn't evaluate the second one. But in the current check, `torch.cuda.is_available` is the first operand, so Python still calls it when I pass `--no-cuda`, even though I specified that I do not want to use CUDA.
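As an illustration of that evaluation order, here is a minimal stand-alone sketch; the `cuda_check` stub and `use_cuda` flag are placeholders for this example, not the actual cleanrl code:

```python
def cuda_check() -> bool:
    """Stand-in for torch.cuda.is_available(); prints so we can see when it runs."""
    print("CUDA check was called")
    return False

use_cuda = False  # e.g. the user passed --no-cuda

# Left operand first: the CUDA check always runs, even though use_cuda is False.
device = "cuda" if cuda_check() and use_cuda else "cpu"

# Flag first: short-circuiting skips the CUDA check entirely when use_cuda is False.
device = "cuda" if use_cuda and cuda_check() else "cpu"
```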
On devices that don't have CUDA drivers (or only have the stub drivers) but have the CUDA build of PyTorch installed, this call throws a runtime error. Running on the CPU is still perfectly valid on such devices, since the PyTorch library can function using the CPU alone.
Ok, that makes sense. I thought that PyTorch would be smart enough not to raise a runtime error from this function.
I'll make a PR with your suggested change.
Problem Description
The PyTorch CUDA-enabled builds are more capable than the CPU-only ones. They can also run on the CPU if no CUDA device is available. However, because of the evaluation order in the device-selection check, the current code base calls into the CUDA driver even if one passes `--no-cuda` as an argument. The issue is this line of code:
cleanrl/cleanrl/dqn.py, line 147 (commit 8cbca61)
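The referenced line is not quoted here; based on the discussion it presumably follows the usual pattern below, with `args.cuda` standing in for the parsed command-line flag (an assumption for illustration, not code taken from the repository):

```python
import argparse

import torch

parser = argparse.ArgumentParser()
parser.add_argument("--no-cuda", action="store_true", help="force CPU even if CUDA is installed")
args = parser.parse_args()
args.cuda = not args.no_cuda  # illustrative mapping of --no-cuda onto a boolean flag

# torch.cuda.is_available() is the first operand, so it always runs and touches
# the CUDA driver, even when --no-cuda was passed.
device = torch.device("cuda" if torch.cuda.is_available() and args.cuda else "cpu")
```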
The code should first check whether the flag is set and only then call `torch.cuda.is_available`. That way, the program runs perfectly fine in those scenarios.

Possible Solution
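A minimal sketch of the reordering described above, assuming a `use_cuda` flag derived from `--no-cuda` (the actual variable name in the repository may differ):

```python
import torch

use_cuda = False  # e.g. set from the command line; False when --no-cuda is passed

# The flag is checked first, so torch.cuda.is_available() is only called when the
# user actually asked for CUDA.
device = torch.device("cuda" if use_cuda and torch.cuda.is_available() else "cpu")
print(device)  # -> cpu on a machine without a usable CUDA driver
```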