
TensorRT Conversion of KeepTrack #425

Open
PradhanSomu opened this issue Mar 14, 2024 · 2 comments

Comments

@PradhanSomu

I am attempting to convert the pretrained weights of the KeepTrack model to TensorRT for inference. As I am new to TensorRT and ONNX, I would greatly appreciate any guidance or suggestions on how to successfully complete this conversion process.

@daisatojp

Hi. There is a related issue; I hope it helps you:
#333

@iMaTzzz

iMaTzzz commented Mar 22, 2024

Hi, I'm currently trying to export object tracking models to ONNX format. I may not be able to help you immediately, but I can give you some insight into what to do:

Basically, to export a model to ONNX format for inference, you simply provide torch.onnx.export with an instance of the model and a dummy input. It will trace the model and capture a static computational graph. Alternatively, you can use torch.onnx.dynamo_export to do the same thing while keeping the dynamic nature of the model. You can look into the details here.
But it's sadly not that simple with state-of-the-art models for multiple reasons:

  • For recent models, there are some layers/computations that are not currently supported in ONNX. I believe it's possible to create a custom operator to work around this, but you would have to do it yourself. As mentioned in any possible convert to onnx model? #333, you would have to implement PRoiPooling yourself in a way that ONNX can actually understand.
  • The other main issue I'm dealing with is the way the models are actually coded. They are not written in the standard PyTorch style, where you would usually find __init__ and forward methods on the model class. In my case with the TaMOS model, the model class instead exposes initialize and track methods, so simply providing an instance of the model to the export function won't work at all, as the tracer cannot understand it. To solve this, you basically have to rewrite the way the model runs inference while keeping its architecture intact, especially if you are using pretrained weights like me.

The only available code I found in object detection (sadly not in object tracking) that actually exports recent models to onnx format was made by open-mmlab: mmdeploy

Feel free to correct me if I'm wrong! I will also try to write some code to export the TaMOS model, so I will follow up in a few days.
