
Minimum Cuda capability is 3.5? But, 3.0 stated on site #17445

Closed
tharindu-mathew opened this issue Mar 5, 2018 · 8 comments
Labels
stat:awaiting response Status - Awaiting response from author type:build/install Build and install issues


@tharindu-mathew

I'm able to run the hello world examples, but the following warning (or error) is printed. So I can't make use of my GPU? While this may be a simple correction on the web page, is there any way I can get a version that allows me to run with a CUDA 3.0 card?

OS: Ubuntu 16.04
GPU: K2000M

On the Linux installation page, the minimum capability is written as 3.0. But when I try to run hello world on a CUDA 3.0 card, the following is printed:

name: Quadro K2000M major: 3 minor: 0 memoryClockRate(GHz): 0.745
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 977.81MiB
2018-03-05 13:43:54.533246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1283] Ignoring visible gpu device (device: 0, name: Quadro K2000M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
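The check behind this log line is effectively a (major, minor) version comparison against a hard-coded minimum. A minimal sketch in Python (the regex and helper are illustrative, not TensorFlow's actual implementation in gpu_device.cc):

```python
import re

MIN_CAPABILITY = (3, 5)  # minimum enforced by the prebuilt binaries

def capability_from_log(line):
    """Extract a (major, minor) compute capability from a gpu_device.cc log line."""
    m = re.search(r"compute capability:\s*(\d+)\.(\d+)", line)
    if m is None:
        return None
    return (int(m.group(1)), int(m.group(2)))

log = ("Ignoring visible gpu device (device: 0, name: Quadro K2000M, "
       "pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda "
       "compute capability 3.0. The minimum required Cuda capability is 3.5.")

cap = capability_from_log(log)
print(cap, cap >= MIN_CAPABILITY)  # (3, 0) False -- the device is ignored
```

Tuple comparison gives the right ordering here because Python compares the major number first and only falls back to the minor number on a tie.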

@yongtang
Member

yongtang commented Mar 5, 2018

I think the minimum capability is 3.5 for the binary install. It might still be possible to support 3.0 when building from source.

Created PR #17448 for the doc fix.
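When building from source, the target capabilities are supplied to the ./configure script via the TF_CUDA_COMPUTE_CAPABILITIES environment variable as a comma-separated list (e.g. "3.0,3.5,5.2"). A small, purely illustrative sketch of parsing such a list and checking whether a given card would be covered by the build:

```python
def parse_capabilities(spec):
    """Parse a TF_CUDA_COMPUTE_CAPABILITIES-style string into (major, minor) tuples."""
    return [tuple(int(part) for part in item.split(".")) for item in spec.split(",")]

# A from-source build that lists "3.0" would accept a compute-3.0 card like the K2000M:
targets = parse_capabilities("3.0,3.5,5.2")
print((3, 0) in targets)  # True
```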

@tharindu-mathew
Author

OK, is this a regression? As per this thread, this should be available: #25

@rohan100jain rohan100jain added stat:awaiting response Status - Awaiting response from author type:build/install Build and install issues labels Apr 4, 2018
@rohan100jain
Member

I think it's mentioned in that thread that you might have to change some lines in common_runtime/gpu/gpu_device.cc for this to work. By default the minimum is 3.5.

@tharindu-mathew
Author

It can be compiled from source with CUDA capability 3.0. It is working for me this way.

@JoshuaC3

@MackieM did you have to make the changes suggested by @rohan100jain before building from source?

@tharindu-mathew
Author

tharindu-mathew commented May 10, 2018 via email

@pabx06

pabx06 commented May 12, 2018

Hello, I have a GPU with CUDA compute capability 2.1. Is there any chance I could change the source code to support my GPU's lower capability, or do TensorFlow's algorithms need a specific capability?

I ask because the CPU takes way too long: I've been training the model for more than 48h so far on CPU. I will likely die of old age before I can evaluate the Inception model for my needs.

  • 19%-30% validation accuracy
  • 1.5M training steps
  • model: Inception-v3 transfer learning & retraining of the classification layer
  • dataset: 6GB
  • 130 classes, ...
  • CPU: Intel(R) Xeon(R) E3-1220 V2 @ 3.10GHz (no usable GPU)

@tharindu-mathew
Author

Quite sure 2.1 is too old for a lot of the functionality and acceleration. Sorry.
