
[REQUEST] make nanoVDB CUDA async allocation optional so it can be used on vGPU #1798

Open
w0utert opened this issue Apr 24, 2024 · 1 comment



w0utert commented Apr 24, 2024

Is your feature request related to a problem? Please describe.

The current nanoVDB implementation uses functions like cudaMallocAsync and cudaMemcpyAsync, for example in CudaDeviceBuffer when allocating or uploading data to the GPU. These functions are not available on a vGPU that does not have unified memory enabled, which is common for GPU-enabled Azure VMs where the GPU is shared/sliced between multiple instances. Running nanoVDB code on such a VM results in CUDA error 801 (cudaErrorNotSupported).
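A small probe makes the failure mode concrete. This sketch (assuming a CUDA 11.2+ toolkit) queries whether the device exposes the stream-ordered memory pools that cudaMallocAsync requires; on the vGPU configurations described above the attribute typically reports 0:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // cudaMallocAsync/cudaFreeAsync rely on stream-ordered memory pools;
    // when a device (e.g. a vGPU slice without unified memory) reports 0
    // here, the async calls fail with cudaErrorNotSupported (801).
    int poolsSupported = 0;
    cudaDeviceGetAttribute(&poolsSupported, cudaDevAttrMemoryPoolsSupported, 0);
    std::printf("memory pools supported: %d\n", poolsSupported);
    return 0;
}
```

Code that must run on both kinds of deployment could consult this attribute at startup before choosing an allocation path.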

Describe the solution you'd like

Projects such as PyTorch typically implement the async code paths behind a switch that enables or disables them, plus a fallback path that uses the synchronous functions. If nanoVDB had something similar, that would be the perfect solution, save for whatever efficiency the synchronous fallback paths might give up.

Describe alternatives you've considered

For my situation there is no real alternative: I am not in a position to change hypervisor settings to enable unified memory support, or to use some other deployment target for the code I want to use with nanoVDB. The only option would be to switch to a VM that uses a passthrough GPU instead of a vGPU, but again this is not under my control.


w0utert commented Apr 25, 2024

Some more information/corrections:

  • This only concerns cudaMallocAsync and cudaFreeAsync, not cudaMemcpyAsync etc.
  • Upon closer inspection of nanovdb/util/cuda/CudaUtils.h, I found that there is already a fallback path when building with CUDA versions prior to 11.2 (the version that introduced the async malloc functions)

Based on this, I created PR #1799, which introduces the macros CUDA_MALLOC and CUDA_FREE and a define NANOVDB_USE_SYNC_CUDA_MALLOC that the host build system can set to force synchronous CUDA allocations.

This has been verified to work on the vGPU deployment target I'm using.

@Idclip Idclip added the nanovdb label May 11, 2024