❓ Question

I have a ViT model for object detection. In the TensorRT 8.5 environment, inference runs at 190 ms per frame. After upgrading to TensorRT 9.3, inference slowed down to 250 ms per frame.
I obtained the C++ dynamic library by compiling the latest Torch-TensorRT source code.
What might be causing this issue?
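Since the regression is reported as wall-clock time per frame, it is worth ruling out measurement noise before comparing the two TensorRT builds. Below is a minimal, library-agnostic timing harness (an illustrative sketch only — `infer` stands in for the real Torch-TensorRT forward call, which is not shown in this issue). Warm-up iterations are discarded so one-time costs (engine deserialization, CUDA context setup, kernel autotuning) do not inflate the average:

```python
import time

def benchmark(infer, n_warmup=10, n_iters=100):
    """Return average per-call latency in milliseconds, after warm-up.

    `infer` is a zero-argument callable wrapping one inference pass.
    """
    # Warm-up: absorb one-time initialization costs.
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / n_iters

# Stand-in workload in place of the real TensorRT inference call:
latency_ms = benchmark(lambda: sum(range(1000)))
print(f"{latency_ms:.3f} ms/frame")
```

For GPU inference, note that kernel launches are asynchronous: the callable should synchronize (e.g., `torch.cuda.synchronize()` or `cudaStreamSynchronize`) so the timer captures actual execution time. Running both TensorRT versions through the same harness, or comparing `trtexec` timing output for the two engines, makes the per-layer source of the slowdown easier to pin down.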
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
Libtorch Version (e.g., 1.0): 2.2.1
CPU Architecture:
OS (e.g., Linux): Ubuntu 22.04
How you installed PyTorch (conda, pip, libtorch, source):
Build command you used (if compiling from source):
Are you using local sources or building from archives: local sources
Python version:
CUDA version: 12.2
GPU models and configuration:
Any other relevant information: