
Any way to deal with this? #2188

Closed
aymuos15 opened this issue May 15, 2024 · 2 comments

@aymuos15

RuntimeError: Found NVIDIA GeForce GTX 1080 Ti which is too old to be supported by the triton GPU compiler, which is used as the backend. Triton only supports devices of CUDA Capability >= 7.0, but your device is of CUDA capability 6.1

I am aware of this: https://discuss.pytorch.org/t/torch-compile-triton-cuda-capability/182068

I was hoping there would be an easy way to adapt the codebase?

@FabianIsensee FabianIsensee self-assigned this May 15, 2024

NickShargan commented May 21, 2024

I've just encountered this issue with a GTX 1080 Ti. A quick fix is to suppress this error by setting an environment variable, e.g.:
export TORCHDYNAMO_DISABLE=1
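
If you'd rather flip the same switch from Python (e.g. in a notebook), set it before torch is imported. A minimal sketch; TORCHDYNAMO_DISABLE is the standard TorchDynamo kill switch, so this just mirrors the export above:

import os
os.environ["TORCHDYNAMO_DISABLE"] = "1"  # must be set before torch is imported
import torch  # TorchDynamo (and thus torch.compile) now falls back to eager mode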

A much better solution is to switch to a newer GPU with Triton support. With an RTX 4060 Ti, I got more than a 2x speedup in training.
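
You can check what compute capability your GPU reports with a PyTorch one-liner (assuming a CUDA build of PyTorch is installed):

python -c "import torch; print(torch.cuda.get_device_capability())"

Triton requires the major number to be 7 or higher.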

@FabianIsensee
Member

The easiest way to deal with this is to disable torch.compile. Just use
nnUNet_compile=f nnUNetv2_train etc.
or export nnUNet_compile=f as an environment variable.
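
If you are adapting your own training code rather than using the nnU-Net switch, the same idea can be expressed as a guard around torch.compile. This is a minimal sketch, not nnU-Net's actual implementation; maybe_compile is a hypothetical helper, and the exact values nnUNet_compile accepts may differ:

import os
import torch

def maybe_compile(model: torch.nn.Module) -> torch.nn.Module:
    # Hypothetical helper: skip torch.compile when the user opted out or
    # when the GPU's CUDA compute capability is below Triton's minimum (7.0).
    if os.environ.get("nnUNet_compile", "t").lower() in ("f", "false", "0"):
        return model
    if torch.cuda.is_available():
        major, _minor = torch.cuda.get_device_capability()
        if major < 7:
            return model  # Triton backend unsupported on this device
    return torch.compile(model)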
