Description
I have seen examples where classification models can be run on TensorRT in INT8 mode. Can you be specific about what I should do to calibrate and produce an INT8 engine for DETECTOR models (ONNX)?
Environment
TensorRT Version: 7
GPU Type: T4
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: 18
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Steps To Reproduce