
[Onnx] Integrate an interface to easily load exported ONNX models [will be updated] #984

Closed
1 of 7 tasks
felixdittrich92 opened this issue Jul 12, 2022 · 3 comments · Fixed by #1601
Labels: topic: onnx (ONNX-related) · type: enhancement (Improvement)

felixdittrich92 (Contributor) commented Jul 12, 2022

🚀 The feature

We need to:

  • ensure all exported models can provide the information we need (for example: which postprocessor, vocab, etc.)
  • set up a build without TF and PT -> ONNX Runtime should be enough (see the sketch after this list)
  • build an equivalent to the current ocr_predictor
  • make the predictor configurable for CPU / GPU (runtime configs)
  • add the ability to export models in the detection/recognition reference scripts
  • replace the documentation section "Preparing your model for inference" with ONNX saving / loading for production
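
To illustrate the second point, a rough sketch of what loading one of the exported models with ONNX Runtime alone could look like (the file path and input shape are placeholders, not the final API):

import numpy as np
import onnxruntime as ort

# Only onnxruntime is needed here, no TF or PyTorch
session = ort.InferenceSession(
    "path/db_mobilenet_v3_small.onnx",
    providers=["CPUExecutionProvider"],  # or "CUDAExecutionProvider" for GPU
)
input_name = session.get_inputs()[0].name
dummy_batch = np.random.rand(1, 3, 1024, 1024).astype(np.float32)  # placeholder input shape
logits = session.run(None, {input_name: dummy_batch})[0]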

To solve before:

Other related Issues:
#789 #790

Related discussion:
#981

Motivation, pitch

As a user, after exporting my models in ONNX format, I want to easily load them in docTR.

Something like:

# -> used the same way as ocr_predictor
predictor = onnx_ocr_predictor(
    det_model='path/db_mobilenet_v3_small.onnx',
    reco_model='path/crnn_mobilenet_v3_small.onnx',
    provider='gpu',  # default: 'cpu'
)
felixdittrich92 added this to the 0.6.0 milestone Jul 12, 2022
frgfm (Collaborator) commented Jul 20, 2022

Hey Felix :)

Thanks for the suggestion!
So I have experimented with that and there are a few things I think we should consider:

  • exporting to ONNX requires much more than loading it: the original DL backend framework, the architecture, the parameter values, and whatever dependencies the export needs (a concrete export sketch follows this list)
  • loading ONNX is meant for light environments & fast inference: we don't need the original DL backend framework, only ONNX Runtime and the architecture + parameter file
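
To make the contrast concrete, a rough sketch of the export side, which pulls in the full PyTorch + docTR stack (the input size, output names, and dynamic axes below are assumptions for illustration, not docTR's actual export utility):

import torch
from doctr.models import db_mobilenet_v3_small  # needs the full DL backend installed

model = db_mobilenet_v3_small(pretrained=True).eval()
dummy_input = torch.rand(1, 3, 1024, 1024)  # placeholder input size
torch.onnx.export(
    model,
    dummy_input,
    "db_mobilenet_v3_small.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch_size"}},  # assumption: allow variable batch size
)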

So my point is that loading ONNX files with docTR is doable, but it will be significantly heavier than loading them in a separate, lighter environment 👍

That being said, even if it's heavier, we could do it for developers to play with. If so, we would need to add a new supported backend 🤷‍♂️

What do you think?

felixdittrich92 (Contributor, Author) commented

Hi @frgfm 👋,

In short: yes, this would be the plan 😅
A bit more detail on what I had in mind: #981

felixdittrich92 modified the milestones: 0.6.0, 0.7.0 Sep 26, 2022
felixdittrich92 modified the milestones: 0.9.0, 2.0.0 Feb 9, 2024
felixdittrich92 (Contributor, Author) commented

Add a link in the docs, then we can close this:

https://github.com/felixdittrich92/OnnxTR
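
For reference, a rough usage sketch, assuming OnnxTR mirrors docTR's high-level API (see the repository README for the exact interface):

from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor

doc = DocumentFile.from_images("sample.jpg")  # placeholder input image
predictor = ocr_predictor()  # loads the default detection + recognition ONNX models
result = predictor(doc)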
