ONNX Inference Decoding #57

Open
rafaelagrc opened this issue Dec 13, 2022 · 3 comments

Comments


rafaelagrc commented Dec 13, 2022

Hello,

I have converted the PARSeq model from Torch Hub to the ONNX format.
I would like to ask whether anyone has done inference and decoding with the ONNX model, since the tokenizer.decode() function cannot be used for this purpose.

rafaelagrc changed the title from "ONNX Inference" to "ONNX Inference Decoding" on Dec 13, 2022
@dankernel

In general, ONNX export converts the model itself, but not its pre- and post-processing.
tokenizer.decode() is post-processing that happens outside the model, so it is not exported along with it.
In my case, I simply reimplemented the decoding separately, since the code is simple and there is no benefit to running it on an accelerator.
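
For reference, here is a minimal sketch of such a decoding step (not code from this repo), assuming the exported model outputs logits of shape (1, max_len, num_classes), that class index 0 is the [E]/EOS token as in the PARSeq tokenizer, and a 94-character charset. The file names, input size, and normalization constants below are assumptions for illustration only:

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Assumed 94-character charset of the pretrained PARSeq models
# (digits, letters, punctuation). Class 0 is the [E] (EOS) token,
# so class i > 0 maps to CHARSET[i - 1].
CHARSET = ("0123456789abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
           "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")
EOS_ID = 0


def preprocess(image_path, img_hw=(32, 128)):
    """Resize and normalize an image (assumed mean=std=0.5, NCHW float32)."""
    img = Image.open(image_path).convert("RGB").resize(img_hw[::-1], Image.BICUBIC)
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - 0.5) / 0.5                      # normalize to [-1, 1]
    return x.transpose(2, 0, 1)[None]        # HWC -> 1xCxHxW


def greedy_decode(logits):
    """Greedily decode logits of shape (seq_len, num_classes) into a string."""
    ids = logits.argmax(axis=-1)
    chars = []
    for i in ids:
        if i == EOS_ID:                      # stop at the first EOS prediction
            break
        chars.append(CHARSET[i - 1])         # shift by 1 because EOS is class 0
    return "".join(chars)


# "parseq.onnx" and "word.png" are placeholder names for illustration.
session = ort.InferenceSession("parseq.onnx")
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: preprocess("word.png")})[0]  # (1, T, C)
print(greedy_decode(logits[0]))
```

Greedy argmax with truncation at the first EOS mirrors what tokenizer.decode() does; if you also need per-character confidences, apply a softmax over the class axis before taking the argmax.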

@Shivanshmundra

@dankernel can you please share your implementation of the tokenizer outside of the model?


WongVi commented Mar 6, 2023

I found a PyTorch implementation of PARSeq that can also be converted to ONNX and TensorRT:
https://github.com/bharatsubedi/PARseq_torch
