For PyTorch vs. Torch-TensorRT compatibility, the versions are aligned: PyTorch v2.2.0 <-> Torch-TensorRT v2.2.0 (prior to PyTorch 2.0 the pairing was offset, e.g. PyTorch 1.13 <-> Torch-TensorRT 1.3.0). Driver compatibility is governed by CUDA; see https://docs.nvidia.com/deploy/cuda-compatibility/index.html. If your PyTorch build targets CUDA 11.8, you need driver >= 450.80.02; if you are using a CUDA 12.1 PyTorch build, you need >= 525.60.13. CUDA error 35 is `cudaErrorInsufficientDriver`, meaning the installed driver is too old for the CUDA runtime your build targets. `nvidia-smi` can help you check whether your CUDA toolkit and driver are aligned.
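The driver check above can be sketched as a small helper. The minimum-driver table below only covers the two CUDA versions mentioned in this thread; the function name and structure are illustrative, not part of Torch-TensorRT.

```python
# Minimum Linux driver versions for the CUDA builds discussed above
# (illustrative subset of the NVIDIA CUDA compatibility table).
MIN_DRIVER = {
    "11.8": (450, 80, 2),
    "12.1": (525, 60, 13),
}

def driver_ok(cuda_version: str, driver_version: str) -> bool:
    """Return True if the installed driver meets the minimum for this CUDA build."""
    required = MIN_DRIVER[cuda_version]
    installed = tuple(int(part) for part in driver_version.split("."))
    # Pad both tuples so e.g. "525.60" compares correctly against (525, 60, 13)
    n = max(len(required), len(installed))
    return installed + (0,) * (n - len(installed)) >= required + (0,) * (n - len(required))

print(driver_ok("11.8", "450.80.02"))  # True
print(driver_ok("12.1", "470.57.02"))  # False: too old for a CUDA 12.1 build
```

Compare the driver version reported by `nvidia-smi` against the CUDA version your PyTorch build targets (`torch.version.cuda`).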
Bug Description
Hi, I see the following error. It looks like `torch.compile` worked fine, but when I invoke the prediction afterwards it errors out.
Does Torch-TensorRT work on a g4dn.xlarge? Why do I get this:
CUDA initialization failure with error: 35
Full log:
tensorrt_torch_error.txt
To Reproduce
Steps to reproduce the behavior:
How was the model compiled?
To rule out that the issue is somewhere else, I tested with the following plain `torch.compile` call, and that works fine:
Should I try some other settings for `torch.compile(model.model_body[0].auto_model, backend="torch_tensorrt")`?
Could the error be related to NVIDIA/TensorRT#308?
Expected behavior
No error.
Environment
How you installed PyTorch (conda, pip, libtorch, source):
Additional context