RuntimeError: Found NVIDIA GeForce GTX 1080 Ti which is too old to be supported by the triton GPU compiler, which is used as the backend. Triton only supports devices of CUDA Capability >= 7.0, but your device is of CUDA capability 6.1
I've just encountered this issue with a GTX 1080 Ti. A quick fix is to suppress the error by setting an environment variable, e.g. export TORCHDYNAMO_DISABLE=1 (see the sketch below).
A much better solution is to switch to a newer GPU that Triton supports. With an RTX 4060 Ti, I got more than a 2x training speedup.
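A minimal sketch of that workaround inside a training script, assuming your PyTorch version honours TORCHDYNAMO_DISABLE (the same flag as the export above, just set from Python):

```python
import os

# Set the flag before torch is imported so Dynamo sees it; with
# TORCHDYNAMO_DISABLE=1, torch.compile falls back to plain eager execution
# and no Triton kernels are generated for the old GPU.
os.environ["TORCHDYNAMO_DISABLE"] = "1"

import torch

model = torch.compile(torch.nn.Linear(8, 8))  # effectively a no-op wrapper now
x = torch.randn(4, 8)
y = model(x)  # runs eagerly, so the Triton capability check is never hit
```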
The easiest way to deal with this is to disable torch.compile:
Just prefix the command, e.g. nnUNet_compile=f nnUNetv2_train ...
Or export nnUNet_compile=f as an environment variable.
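For reference, this is roughly how such a flag can gate compilation; a hedged sketch only, the exact parsing nnUNet does for nnUNet_compile may differ:

```python
import os
import torch

def compile_enabled() -> bool:
    # Hypothetical helper: anything other than a "true"-ish value of
    # nnUNet_compile means "do not compile"; compilation is on by default.
    return os.environ.get("nnUNet_compile", "t").lower() in ("t", "true", "1")

network = torch.nn.Conv3d(1, 32, kernel_size=3)
if compile_enabled():
    network = torch.compile(network)  # skipped when nnUNet_compile=f is set
```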
I am aware of this: https://discuss.pytorch.org/t/torch-compile-triton-cuda-capability/182068
I was hoping there would be an easy way to adapt the codebase?
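One common way to adapt a training script (a hedged sketch, not nnUNet's actual code; maybe_compile is a hypothetical helper) is to guard torch.compile behind a compute-capability check, since the Triton backend needs capability >= 7.0:

```python
import torch

def maybe_compile(net: torch.nn.Module) -> torch.nn.Module:
    """Compile only when the Triton backend can target the GPU.

    Triton requires CUDA compute capability >= 7.0 (Volta or newer); on older
    cards such as the GTX 1080 Ti (6.1) the module is returned uncompiled.
    """
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability()
        if (major, minor) >= (7, 0):
            return torch.compile(net)
    return net

net = maybe_compile(torch.nn.Linear(8, 8))
```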