ONNX runtime inference compatibility #904

Open · wants to merge 2 commits into main
Conversation

maikimati (Contributor)

ONNX model compatibility to speed up CPU inference in object detection

karl-joan (Contributor) commented on Jul 14, 2023

Hey @maikimati, I have some thoughts regarding this implementation.

  • Perhaps it would be wise to create a separate function for the image preprocessing, in case one would like to override it (a sketch of such a function follows this list).
  • The current resizing doesn't maintain the aspect ratio, even though the slices needn't be square. The implementation in YOLOv8 first resizes the longest side and then pads the remaining space; there is also an open pull request for an OpenVINO implementation (OpenVino support for yolov8 object detection #896) with the same resizing scheme.
  • It would be nice if you could give the load_model function a dictionary of options for setting up the inference session, including an alternative execution provider such as OpenVINO (see the second sketch below).
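
A minimal sketch of the letterbox-style preprocessing described above, assuming OpenCV and NumPy; the function name, the default input size, and the gray padding value 114 follow YOLOv8 conventions but are otherwise illustrative, not part of this PR:

```python
import cv2
import numpy as np


def preprocess_image(image: np.ndarray, input_size: int = 640) -> np.ndarray:
    """Letterbox resize: scale the longest side to `input_size`, pad the rest.

    Assumes a 3-channel HWC uint8 image, as produced by cv2.imread.
    """
    h, w = image.shape[:2]
    scale = input_size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))

    # cv2.resize takes (width, height); aspect ratio is preserved here.
    resized = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)

    # Pad the remaining space with a constant gray border (value 114, as YOLOv8 does).
    padded = np.full((input_size, input_size, 3), 114, dtype=resized.dtype)
    padded[:new_h, :new_w] = resized

    # HWC -> CHW, scale to [0, 1], add a batch dimension for the ONNX input.
    return padded.transpose(2, 0, 1)[np.newaxis].astype(np.float32) / 255.0
```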
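And a sketch of what an options-driven load_model could look like. InferenceSession, SessionOptions, and the providers argument are real onnxruntime API (OpenVINOExecutionProvider requires the onnxruntime-openvino build); the options dictionary and its keys are hypothetical:

```python
from typing import Optional

import onnxruntime as ort


def load_model(model_path: str, options: Optional[dict] = None) -> ort.InferenceSession:
    """Create an InferenceSession configured from a user-supplied dict.

    The keys used here ("num_threads", "providers") are illustrative only.
    """
    options = options or {}

    sess_options = ort.SessionOptions()
    if "num_threads" in options:
        sess_options.intra_op_num_threads = options["num_threads"]

    # Default to plain CPU inference; pass e.g.
    # ["OpenVINOExecutionProvider", "CPUExecutionProvider"] to prefer OpenVINO.
    providers = options.get("providers", ["CPUExecutionProvider"])
    return ort.InferenceSession(model_path, sess_options=sess_options, providers=providers)
```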

karl-joan mentioned this pull request on Jul 28, 2023
fcakyon (Collaborator) left a comment


Code styling errors need to be fixed, tests should be added, and a demo notebook should be included :)

maikimati and others added 2 commits on November 25, 2023

ONNX model compatibility to speed up CPU inference in object detection