Minimum CUDA capability is 3.5? But 3.0 stated on site #17445
Comments
I think the minimal capability is 3.5 for the binary install. It might still be possible to support 3.0 when building from source. Created PR #17448 for the doc fix.
OK, is this a regression? As per this thread, this should be available? #25
I think it's mentioned in the thread that you might have to change some lines in common_runtime/gpu/gpu_device.cc for this to work. By default the minimum is 3.5.
It can be compiled from source with CUDA compute capability 3.0. It is working for me this way.
@MackieM did you have to make the changes suggested by @rohan100jain before building from source?
No. I believe Bazel configured it for me without having to touch the source.
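For anyone trying the from-source route mentioned above, the rough shape of the build is sketched below. The exact configure prompts and flags vary by TensorFlow version, so treat the specific values (and the `TF_CUDA_COMPUTE_CAPABILITIES` preset) as illustrative rather than a guaranteed recipe.

```shell
# Sketch of a from-source build targeting compute capability 3.0.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow

# Either answer the ./configure prompt for compute capabilities with 3.0,
# or preset the environment variable the configure script reads:
export TF_CUDA_COMPUTE_CAPABILITIES=3.0
./configure

# Build the pip package with CUDA support, then build and install the wheel.
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```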
Hello, I have a GPU with CUDA compute capability 2.1. Is there a chance I can change the source code to support my GPU's lower capability, or do TensorFlow's algorithms need a specific capability? The CPU takes way too long: I have been training the model for more than 48 hours so far on CPU.
I'm quite sure 2.1 is too old for a lot of the functionality and acceleration. Sorry.
I'm able to run the hello world examples, but the following warning (or error) is printed. So I can't make use of my GPU? While this may be a simple correction on the web page, is there any way I can get a version that allows me to run with a CUDA 3.0 card?
OS: Ubuntu 16.04
GPU: K2000M
On the Linux installation page, the minimum capability is written as 3.0. But when I try to run hello world on a CUDA 3.0 card, the following is printed:
name: Quadro K2000M major: 3 minor: 0 memoryClockRate(GHz): 0.745
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 977.81MiB
2018-03-05 13:43:54.533246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1283] Ignoring visible gpu device (device: 0, name: Quadro K2000M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
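The log line above is just a version comparison: the device's compute capability (3.0 for the Quadro K2000M) falls below the floor compiled into the prebuilt binaries (3.5). A minimal sketch of that comparison, with names of my own choosing rather than TensorFlow internals:

```python
# Minimal sketch (not TensorFlow code) of why the device in the log above
# is skipped: its compute capability is below the binary's floor.
MIN_CAPABILITY = (3, 5)  # floor compiled into the prebuilt wheels

def parse_capability(s):
    """Turn a string like '3.0' into a comparable (major, minor) tuple."""
    major, minor = s.split(".")
    return int(major), int(minor)

# Quadro K2000M reports "major: 3 minor: 0" in the log.
device_capability = parse_capability("3.0")

# Tuples compare lexicographically, so (3, 0) < (3, 5).
print(device_capability >= MIN_CAPABILITY)  # → False: the device is ignored
```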