I've been following #4663 and I'm seeing something similar, but I cannot figure out why. I can access my GPU (device 0) with nvidia-smi, and I can use it from the same conda environment with PyTorch, so I'm unclear why DALI cannot. This is inside a conda environment under WSL on Windows.
The minimal example above fails with this error:
python dali_test.py
/root/miniconda3/envs/multilabelimage_model_env/lib/python3.11/site-packages/nvidia/dali/backend.py:99: Warning: nvidia-dali-cuda120 is no longer shipped with CUDA runtime. You need to install it separately. cuFFT is typically provided with CUDA Toolkit installation or an appropriate wheel. Please check https://docs.nvidia.com/cuda/cuda-quick-start-guide/index.html#pip-wheels-installation-linux for the reference.
  deprecation_warning(
/root/miniconda3/envs/multilabelimage_model_env/lib/python3.11/site-packages/nvidia/dali/backend.py:110: Warning: nvidia-dali-cuda120 is no longer shipped with CUDA runtime. You need to install it separately. NPP is typically provided with CUDA Toolkit installation or an appropriate wheel. Please check https://docs.nvidia.com/cuda/cuda-quick-start-guide/index.html#pip-wheels-installation-linux for the reference.
  deprecation_warning(
/root/miniconda3/envs/multilabelimage_model_env/lib/python3.11/site-packages/nvidia/dali/backend.py:121: Warning: nvidia-dali-cuda120 is no longer shipped with CUDA runtime. You need to install it separately. nvJPEG is typically provided with CUDA Toolkit installation or an appropriate wheel. Please check https://docs.nvidia.com/cuda/cuda-quick-start-guide/index.html#pip-wheels-installation-linux for the reference.
  deprecation_warning(
Traceback (most recent call last):
  File "/mnt/c/Coding/Testing/PyTorch/MultiLabelClassification_Patreon/actual_real_user_code/dali_test.py", line 8, in <module>
    pipe.build()
  File "/root/miniconda3/envs/multilabelimage_model_env/lib/python3.11/site-packages/nvidia/dali/pipeline.py", line 979, in build
    self._init_pipeline_backend()
  File "/root/miniconda3/envs/multilabelimage_model_env/lib/python3.11/site-packages/nvidia/dali/pipeline.py", line 813, in _init_pipeline_backend
    self._pipe = b.Pipeline(
                 ^^^^^^^^^^^
RuntimeError: CUDA runtime API error cudaErrorInvalidDevice (101):
invalid device ordinal
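For context, `cudaErrorInvalidDevice` (101) means the device ordinal passed to the CUDA runtime does not correspond to any device the runtime can see. nvidia-smi talks to the driver directly and is not affected by `CUDA_VISIBLE_DEVICES`, so one common way to hit this symptom (GPU visible to nvidia-smi but not to a runtime-API consumer such as DALI) is ordinal filtering or remapping by that variable. The sketch below is stdlib-only and illustrative; `visible_ordinal_to_physical` is a name I made up, not a CUDA or DALI API:

```python
import os

def visible_ordinal_to_physical(ordinal, env=None):
    """Map a logical CUDA device ordinal to a physical device index,
    mimicking how the runtime honours CUDA_VISIBLE_DEVICES:
    - unset variable: all devices visible, logical == physical;
    - set variable: only the listed devices are visible, renumbered
      from 0 in list order.
    Returns None for an ordinal with no backing device -- the case
    that surfaces as cudaErrorInvalidDevice / invalid device ordinal.
    """
    env = os.environ if env is None else env
    visible = env.get("CUDA_VISIBLE_DEVICES")
    if visible is None:
        return ordinal  # no filtering applied
    ids = [v.strip() for v in visible.split(",") if v.strip()]
    if ordinal < 0 or ordinal >= len(ids):
        return None  # requested ordinal is out of the visible range
    try:
        return int(ids[ordinal])
    except ValueError:
        return None  # non-numeric entry (e.g. a mistyped UUID)
```

For example, with `CUDA_VISIBLE_DEVICES=""` (set but empty) no ordinal is valid, so `device_id=0` would fail even though the GPU exists. This is only one possible cause; in WSL a driver/runtime mismatch inside the guest can produce the same error.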
Other/Misc.
Found similar issues but could not find a solution
Check for duplicates
I have searched the open bugs/issues and have found no duplicates for this bug report
Version
nvidia-dali-cuda120 1.37.1, nvidia-dali-nightly-cuda120 1.38.0.dev20240507