
How to customize usage of GPU #23

Open
Nelsonvon opened this issue Jul 7, 2021 · 2 comments
@Nelsonvon

Hi

The computation always runs on the first graphics card (cuda:0). Is there any way to choose which card is used?

Besides, I got an error while simulating wavs in a PyTorch DataLoader with multiple sub-processes (num_workers > 0). The processing breaks and returns an initialization error from gpuRIR. Has anyone else noticed this problem and found a way to solve it?

Thanks

Best regards,
Nelson

@DavidDiazGuerra
Owner

Hi Nelson,

At this time, the library doesn't include the option to choose the GPU. It would be a nice feature to add in the future, but I have neither the time to implement it right now nor a multi-GPU machine to test it.
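A common CUDA-level workaround (not a gpuRIR feature, so treat this as an untested suggestion) is to hide all but the desired device with the CUDA_VISIBLE_DEVICES environment variable before anything initializes CUDA:

```python
import os

# Expose only the second physical GPU to the CUDA runtime; inside this
# process it will show up as device 0. This must happen before any CUDA
# initialization, i.e. before importing gpuRIR.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import gpuRIR
```

You can also set the variable in the shell (CUDA_VISIBLE_DEVICES=1 python script.py) instead of in the script.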

About the PyTorch DataLoader with multiple sub-processes: I haven't used gpuRIR in that context, but I think multiple sub-processes are typically used when the DataLoader runs on the CPU, so you can generate your batch on the CPU while the neural network runs on the GPU. Could you be running out of GPU memory?

Best regards,
David

@YaguangGong

I also encountered the DataLoader issue. It seems to be caused by the default start method of torch.multiprocessing: the CUDA runtime does not support the fork start method. Just use torch.multiprocessing.set_start_method() to switch from fork to spawn or forkserver. See the PyTorch docs:
https://pytorch.org/docs/stable/notes/multiprocessing.html?highlight=set_start_method
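A minimal sketch of that fix (RIRDataset here is a hypothetical placeholder for whatever dataset calls gpuRIR):

```python
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, Dataset

class RIRDataset(Dataset):
    """Hypothetical placeholder for a dataset that calls gpuRIR in __getitem__."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.zeros(1)  # stand-in for a simulated signal

if __name__ == "__main__":
    # CUDA cannot be (re)initialized in a forked subprocess, so switch the
    # worker start method from the default "fork" to "spawn" before
    # creating the DataLoader.
    mp.set_start_method("spawn")
    loader = DataLoader(RIRDataset(), num_workers=2)
    for batch in loader:
        pass
```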
