
Supporting multiple GPU models #10

Open
auroracramer opened this issue Nov 1, 2018 · 4 comments

Comments

@auroracramer
Collaborator

Should support for running the embedding models on multiple GPUs be prioritized? Here are the pros and cons as I see them (not necessarily equally weighted in importance):

Pros

  • Allows users to take advantage of multiple GPUs for shorter running times

Cons

  • Adds an extra parameter to most API calls, though it can be optional
  • Adds bulk to the codebase (though we already have the relevant code)
  • It's unclear whether we can test this on Travis CI

All in all, I think that if we believe using multiple GPUs will be a common use case, then we should include it. But if it's something that will rarely be used, if at all, we shouldn't prioritize it (at least for an MVP).

@auroracramer
Collaborator Author

Another question that comes up if we add multi-GPU support is what the default number of GPUs should be. Should it be just 1? Or should it be the maximum number of GPUs available?

@justinsalamon
Collaborator

I think supporting multiple GPUs would be nice, but it's definitely not critical for the MVP.

If people have access to multiple GPUs, they can always parallelize over the data to make use of all of them, without the library needing to explicitly support multi-GPU inference.
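
For concreteness, here's a minimal sketch of that workaround (the process_files.py worker script is hypothetical): shard the input files across one process per GPU, pinning each process to its own device via CUDA_VISIBLE_DEVICES.

```python
# Hypothetical driver: one worker process per GPU, each seeing only its
# own device through CUDA_VISIBLE_DEVICES.
import os
import subprocess
import sys

files = sys.argv[1:]  # audio files to embed
n_gpus = 2            # assumed GPU count on this machine

procs = []
for gpu_id in range(n_gpus):
    shard = files[gpu_id::n_gpus]  # round-robin shard of the inputs
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    procs.append(subprocess.Popen(
        [sys.executable, "process_files.py"] + shard, env=env))

for p in procs:
    p.wait()  # block until every shard has been processed
```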

Also - would this require saving additional model files, since keras multi-GPU model files are stored differently on disk?

@auroracramer
Collaborator Author

Fair enough, let's not prioritize this for the MVP. Supporting multiple GPUs wouldn't require saving additional model files; we can wrap each model for multi-GPU inference after loading it.
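
To make that concrete, a minimal sketch assuming Keras 2.x (the helper name and model path are illustrative, not the project's API): keras.utils.multi_gpu_model replicates an already-loaded model across GPUs at runtime, so the weights on disk stay in the ordinary single-GPU format.

```python
# Minimal sketch, assuming Keras 2.x; load_embedding_model and the model
# path are illustrative names, not the project's actual API.
from keras.models import load_model
from keras.utils import multi_gpu_model

def load_embedding_model(path, n_gpus=1):
    model = load_model(path)  # ordinary single-GPU weights on disk
    if n_gpus > 1:
        # Replicate the model across GPUs; predict() batches are then
        # split across the replicas, with no extra files saved.
        model = multi_gpu_model(model, gpus=n_gpus)
    return model
```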

@justinsalamon
Collaborator

I imagine we could support it via an n_gpu optional arg with a default value of 1.
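
Something along those lines could look like the sketch below; get_embedding, its parameters, and MODEL_PATH are illustrative names, not the actual API.

```python
# Hypothetical API sketch: n_gpu is optional and defaults to 1, so
# single-GPU behavior is unchanged unless the caller opts in.
from keras.models import load_model
from keras.utils import multi_gpu_model

MODEL_PATH = "embedding_model.h5"  # illustrative path

def get_embedding(frames, n_gpu=1):
    model = load_model(MODEL_PATH)
    if n_gpu > 1:
        model = multi_gpu_model(model, gpus=n_gpu)
    return model.predict(frames)

# Callers opt in explicitly:
# emb = get_embedding(frames)           # default: single GPU
# emb = get_embedding(frames, n_gpu=4)  # use four GPUs
```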
