
What is the expected speedup when using OpenCL? #123

Open
gwiesenekker opened this issue Sep 3, 2023 · 1 comment
Labels: question (Further information is requested)

Comments

@gwiesenekker

No description provided.

@joaopauloschuler joaopauloschuler self-assigned this Sep 4, 2023
@joaopauloschuler joaopauloschuler added the question Further information is requested label Sep 4, 2023
@joaopauloschuler
Owner

This is an excellent question that opens up plenty of passionate debate. When I rent hardware to run my own models, I prefer CPU-only hardware without a GPU. My motivations: AVX-capable CPUs are cheap, and I don't risk exceeding VRAM.

Before you consider me crazy, I recommend having a look at:

To answer your question: small non-convolutional models might actually be slower on a GPU. I would use a GPU only for bigger models with convolutions, where I would expect an improvement of 2x to 8x. My own models are trained in CPU-only environments because I have found better price x performance on CPU.

Depending on where I rent hardware, I can get 20 CPU cores for the cost of 1 GPU. In any case, one model can be price-effective on GPU while the next may not be. It's a moving target.
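The trade-off above can be sketched as simple break-even arithmetic. All figures here are illustrative assumptions, not measurements: suppose a GPU option costs R times as much as the CPU option for the same rental period, and trains a given model S times faster. The GPU is price-effective only when S exceeds R.

```python
def gpu_is_price_effective(speedup: float, cost_ratio: float) -> bool:
    """True when the GPU's training speedup exceeds its relative cost.

    speedup: how many times faster the GPU option trains the model
             compared to the CPU option (e.g. the 2x-8x range above).
    cost_ratio: GPU rental cost divided by CPU rental cost for the
                same period (hypothetical figure).
    """
    return speedup > cost_ratio


# With the 2x-8x convolutional speedup mentioned above and a GPU that
# costs, say, 3x the CPU option (a made-up figure for illustration):
print(gpu_is_price_effective(8.0, 3.0))  # large conv model: True
print(gpu_is_price_effective(2.0, 3.0))  # small conv model: False
```

This also shows why the answer is a moving target: both the speedup and the cost ratio change per model and per provider, so the same comparison has to be redone each time.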
