
how to deploy to end user? / allow cpu fallback for gpu version? #39

Open
nonchip opened this issue Nov 4, 2018 · 1 comment
nonchip commented Nov 4, 2018

When making a packaged build for end-user release, we can't really predict whether or not they have a compatible GPU, so how would I go about a procedure like:

  • detect GPU compatibility
  • if compatible, (ask them to; or automatically) install the cuda/cudnn dependencies
  • or fall back to the CPU version

The first one would be rather easy (check hardware info and show links to, or bundle, the installers), but would I have to package two builds for the different versions, or could the plugin be made to fall back automatically?
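For the detection step, a minimal heuristic sketch (my assumption: a working `nvidia-smi` on PATH is taken as evidence of a CUDA-capable GPU; this does not verify CUDA/cuDNN version compatibility) could look like:

```python
import shutil
import subprocess

def has_cuda_gpu():
    """Heuristic: treat a runnable nvidia-smi as evidence of a CUDA-capable
    GPU. Assumes the Nvidia driver installs nvidia-smi and puts it on PATH;
    it does NOT check cuda/cudnn library compatibility."""
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False
    try:
        # nvidia-smi exits 0 when it can talk to at least one GPU
        return subprocess.run([smi], capture_output=True).returncode == 0
    except OSError:
        return False

# Installer/launcher side: pick which backend to set up
backend = "gpu" if has_cuda_gpu() else "cpu"
```

This only answers the "detect" bullet; the install-or-fallback step would still hinge on how the cuda/cudnn dependencies can be redistributed.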

EDIT: I guess this would also require multiplatform support, which as far as I can see isn't provided yet? So I'd probably be better off not using this plugin directly, and instead providing either a "cloud" solution or a custom external worker, started by the game, that it then connects to using the normal UE4 TCP socket stuff?


getnamo commented Nov 5, 2018

My layman's understanding of cuda/cudnn DLL distribution is that it requires click-through consent from the users, or they need to download it from the source (which requires developer registration). Also, given that CUDA is Nvidia-only, this will lock part of your users out of the higher performance.

With that in mind, if your model can run inference on the CPU at acceptable speed, the easiest approach is to ship that version. If, on the other hand, you do require GPU-level performance, you'll need to figure out how to properly distribute the cuda/cudnn DLLs or, as you suggested, use a cloud backend.

There are certainly ways to detect available GPUs, cross-reference their capabilities, and ask pip to download the correct tensorflow build, but that won't solve the distribution issue. This plugin can't help you with that; only clarification of the distribution terms from Nvidia can.
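The "ask pip for the correct tensorflow" part could be sketched as below. Assumptions of mine: the package split is the TF 1.x era naming (`tensorflow` vs `tensorflow-gpu`), the pinned version is just an example, and the GPU-detection result is produced elsewhere in the installer:

```python
import sys

def tensorflow_pip_command(gpu_available, version="1.12.0"):
    """Build (but don't run) the pip command for the matching TF build.
    Package names follow the TF 1.x split: 'tensorflow' (CPU-only)
    vs 'tensorflow-gpu' (requires cuda/cudnn to be present)."""
    package = "tensorflow-gpu" if gpu_available else "tensorflow"
    return [sys.executable, "-m", "pip", "install", f"{package}=={version}"]

# An installer would then hand this to subprocess.check_call(...);
# that still leaves the cuda/cudnn DLL licensing question open.
```

Note this only selects the wheel; `tensorflow-gpu` still fails at import time if the matching cuda/cudnn DLLs aren't installed, which is exactly the distribution problem above.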

See https://docs.nvidia.com/deeplearning/sdk/cudnn-sla/index.html#distribution for details.
