
[Question] Running each trial on a different GPU? #258

Open
thiagoribeirodamotta opened this issue Oct 22, 2019 · 2 comments

Comments

@thiagoribeirodamotta

Is it possible, in a multi-GPU scenario, to have each available GPU run a separate trial? So far, using multi_gpu_model has not accelerated our computer vision deep learning model (U-Net / Mask R-CNN), so running each trial on a separate GPU could give us significant speedups, but I've found no information on the matter.

Thank you.

@maxpumperla
Owner

This is something we would have to raise in hyperopt itself. It's not a simple matter, but it's very interesting. It certainly doesn't just happen out of the box.

@JonnoFTW
Collaborator

JonnoFTW commented Nov 27, 2019

The simplest path to getting this to work would be to use the GPU identifier as a custom hyperparameter that always returns the next value in a list via itertools.cycle(GPU_IDS). From there you'd use the hyperopt MongoDB worker and make sure there are only ever len(GPU_IDS) concurrent workers.

Something like:

import tensorflow as tf

# hp.cycle here is the proposed custom hyperparameter, not an existing
# hyperopt function -- it would hand back the next id from
# itertools.cycle(GPU_IDS) on each trial.
with tf.device({{hp.cycle(['/gpu:0', '/gpu:1'])}}):
    ...

I'm not sure what impact this would have on TPE, though.
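For reference, the round-robin part on its own is just plain Python (no hyperopt or TensorFlow needed to see the idea; GPU_IDS and next_gpu are illustrative names, and in a real setup the returned id would be passed to tf.device inside the objective):

```python
from itertools import cycle

# Illustrative list of device ids; one concurrent worker per entry.
GPU_IDS = ['/gpu:0', '/gpu:1']

# cycle() yields the list endlessly: /gpu:0, /gpu:1, /gpu:0, ...
gpu_cycle = cycle(GPU_IDS)

def next_gpu():
    """Return the next device id in round-robin order.

    Each trial would call this (via the custom hyperparameter) and
    build its model under tf.device(next_gpu()).
    """
    return next(gpu_cycle)

# Four successive trials alternate between the two devices:
assignments = [next_gpu() for _ in range(4)]
print(assignments)  # ['/gpu:0', '/gpu:1', '/gpu:0', '/gpu:1']
```

Note that with separate MongoDB worker processes this iterator would not be shared across workers, which is exactly why the id has to travel through the trial's hyperparameters rather than module state.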
