I have converted the 'PARSeq' model from the Torch Hub to the ONNX format.
I would like to ask if anyone has done inference and decoding with the ONNX model, since the `tokenizer.decode()` function cannot be used for this purpose.
In general, ONNX can convert the model itself, but not the preprocessing and postprocessing. `tokenizer.decode()` is post-processing that lives outside the model, so it is not exported along with it.
In my case, I just implemented the decoding separately, since the code is simple and there is no advantage to running it on an accelerator.
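For reference, here is a minimal sketch of such a separate decoder. It assumes the ONNX model outputs logits of shape `(batch, seq_len, num_classes)` and that, as in PARSeq's tokenizer, the end-of-sequence token occupies index 0 with the charset following it; the `CHARSET` shown is only an example, so verify both the charset and the special-token layout against your own export before relying on this.

```python
import numpy as np

# Example charset; replace with the charset your model was trained on.
CHARSET = "0123456789abcdefghijklmnopqrstuvwxyz"
EOS_ID = 0  # assumed position of the end-of-sequence token

def decode_greedy(logits: np.ndarray) -> list[str]:
    """Map raw logits to strings: argmax per step, stop at EOS."""
    ids = logits.argmax(axis=-1)  # (batch, seq_len)
    texts = []
    for seq in ids:
        chars = []
        for i in seq:
            if i == EOS_ID:  # stop decoding at the EOS token
                break
            chars.append(CHARSET[i - 1])  # shift past the EOS slot at index 0
        texts.append("".join(chars))
    return texts
```

You would feed this the raw output tensor from `onnxruntime` (e.g. `session.run(None, {"input": image})[0]`, with the actual input name taken from your exported graph).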