
Define which GPU for inference? #1418

Open
RSly opened this issue Jan 26, 2017 · 8 comments · May be fixed by #1511

Comments

@RSly

RSly commented Jan 26, 2017

Hi,

How can we define which GPU to use for inference?
(the same way we can for training)

It seems that DIGITS always tries to use GPU 0, even when GPU 0 is busy and GPU 1 is free.

This comes from the fact that "Infer One Image" calls inference.py with the argument --gpu=0, which is a bug since GPU 0 has no memory left (GPU 1 is free).

How is it decided which GPU is used for inference?

@lukeyeager
Member

Which version of DIGITS are you using?

@RSly
Author

RSly commented Jan 27, 2017

Version 5.1-dev.

I should clarify that GPU 0 is busy with other frameworks such as TensorFlow.

Can DIGITS see that? I am not sure.
If not, it would be nice to have the GPU option for inference too.

@lukeyeager
Member

> I should clarify that GPU 0 is busy with other frameworks such as TensorFlow.

Ah, thanks for the clarification. No, DIGITS isn't aware of what other processes may be doing on the GPU. You should isolate GPUs: for DIGITS with `CUDA_VISIBLE_DEVICES=0,1 ./digits-devserver`, and for TensorFlow with `CUDA_VISIBLE_DEVICES=2,3 python main.py`, for example.
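The same isolation also works from inside the TensorFlow script itself. A minimal sketch, assuming the environment variable is set before CUDA initializes (i.e. before TensorFlow creates a session or enumerates devices):

```python
import os

# Hide GPUs 0 and 1 (left for DIGITS); must happen before CUDA init.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

from tensorflow.python.client import device_lib

# TensorFlow now enumerates only physical GPUs 2 and 3, renumbered 0 and 1.
print(device_lib.list_local_devices())
```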

@RSly
Author

RSly commented Jan 30, 2017

Thanks for the quick solution!

Still, it would be best to provide this option on the inference page so the GPU can be changed on the fly.
The same feature exists for training :)

@rodrigoberriel
Contributor

@lukeyeager I have an implementation of this that I can submit as a PR, but I don't know if the layout is okay with you. Could you please take a look at it?

If the user doesn't have more than one GPU, nothing will change:

[screenshot: inference form, single GPU, unchanged layout]

But if the user has multiple GPUs, it looks like this:

[screenshot: inference form with a GPU selection list]

where Next Available is the default choice and behaves the same as the current implementation. If the user selects a GPU (only one can be selected), then that GPU is used.
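A hypothetical sketch of the mapping (not the PR's actual code; the function name and form values below are made up for illustration), using the `--gpu` flag that inference.py already receives:

```python
def gpu_args(selected):
    """Map the inference form's GPU choice to inference.py arguments."""
    if selected == "next_available":     # default: keep current behavior,
        return []                        # let the scheduler pick a free GPU
    return ["--gpu=%d" % int(selected)]  # pin inference to the chosen GPU
```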

Can you think of any tests I should run before submitting it? I did some tests and I have been using this for a while in our lab; everything seems to work properly.

@rodrigoberriel linked a pull request on Mar 15, 2017 that will close this issue
@ontheway16

@rodrigoberriel Hello, will this make it possible to select the GPU for inference from the command line (curl ...)? As far as I know, you cannot choose the GPU for command-line inference, right?
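For context, a rough sketch of today's command-line inference call, written with Python's requests instead of curl (the endpoint is the classification one from the DIGITS REST docs as I understand it; the job id and image path are placeholders). Note that the request carries no GPU parameter:

```python
import requests

# Placeholder job id and image; no GPU field exists in this request.
resp = requests.post(
    "http://localhost:5000/models/images/classification/classify_one.json",
    data={"job_id": "20170126-000000-abcd"},
    files={"image_file": open("example.jpg", "rb")},
)
print(resp.json())
```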

Thanks,

@RSly
Author

RSly commented Aug 18, 2017

Any news on this PR?

If the CPU option is also added in the long run, even better :)

@ontheway16

Wondering if there are any updates on this? I still need inference to run on the first available GPU within the DIGITS environment, but I think it is locking inference to GPU 0 in multi-GPU environments?
