
Ultralytics inference vs DeepStream: same model does not yield the same results #531

Open
mgabell opened this issue Apr 22, 2024 · 0 comments
mgabell commented Apr 22, 2024

Hi,

I have a YOLOv8 model that I converted to ONNX and run on a Jetson AGX Orin using DeepStream and your library.
I get completely different results when I run the same source with the same model through Ultralytics than when I use GStreamer/DeepStream.

Why could this be? Perhaps the ONNX conversion. If so, can I run it without conversion, using cfg and weights/wts files as described in
https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/

NOTE: You can use your custom model, but it is important to keep the YOLO model reference (yolov8_) in your cfg and weights/wts filenames to generate the engine correctly.

Step 5. Generate the cfg, wts and labels.txt (if available) files (example for YOLOv8s):

python3 gen_wts_yoloV8.py -w yolov8s.pt
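A common cause of this kind of mismatch is preprocessing rather than the conversion itself: Ultralytics letterboxes each frame (aspect-preserving resize plus symmetric padding), while a pipeline that simply stretches the frame to 640x640 feeds the network differently distorted objects. A minimal sketch of the letterbox geometry, assuming a 640 input size (the function name and the 1080p example are mine, not from the issue):

```python
def letterbox_params(orig_h, orig_w, target=640):
    """Compute the scale and padding an Ultralytics-style letterbox applies.

    The image is scaled so its longer side fits `target` while keeping
    aspect ratio; the shorter side is then padded symmetrically.
    """
    scale = min(target / orig_h, target / orig_w)
    new_h, new_w = round(orig_h * scale), round(orig_w * scale)
    pad_h, pad_w = target - new_h, target - new_w
    # split the padding evenly between the two sides
    top, left = pad_h // 2, pad_w // 2
    return scale, (new_h, new_w), (top, pad_h - top, left, pad_w - left)

# example: a 1920x1080 frame is resized to 640x360 and padded 140 px
# on top and bottom
scale, resized, pads = letterbox_params(1080, 1920)
print(scale, resized, pads)
```

If the DeepStream side resizes without padding, the same object occupies a different region of the network input than it did during Ultralytics inference, which alone can change the detection count.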

The number of objects found is MUCH lower than with the Ultralytics approach.

This is how I run Ultralytics:
yolo predict model=/mnt/M2Disk/Assets/YoloV8_Model/weights/best.pt source='/mnt/M2Disk/Assets/TestRun/jpg/Images' imgsz=640 save_txt=true save=false save_conf=true
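With save_txt=true and save_conf=true, Ultralytics writes one line per detection in the form "class x_center y_center width height confidence" (coordinates normalized to 0..1). A small sketch that re-counts those detections at whatever threshold the DeepStream side uses, so the two runs are compared on equal terms (the helper name and sample lines are mine, for illustration):

```python
def count_detections(label_text, conf_thresh=0.25):
    """Count YOLO save_txt detections at or above a confidence threshold.

    `label_text` is the content of one label file produced with
    save_txt=true and save_conf=true.
    """
    count = 0
    for line in label_text.strip().splitlines():
        parts = line.split()
        # expect: class, x_center, y_center, width, height, confidence
        if len(parts) == 6 and float(parts[5]) >= conf_thresh:
            count += 1
    return count

sample = """0 0.5 0.5 0.2 0.3 0.91
2 0.1 0.2 0.05 0.07 0.18
0 0.7 0.4 0.1 0.1 0.40"""
print(count_detections(sample, 0.25))  # → 2
```

Sweeping conf_thresh here and comparing against the DeepStream object counts quickly shows whether the gap is a threshold mismatch or a genuine inference difference.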

For DeepStream I combine the deepstream-test1 and deepstream-test3 Python apps from NVIDIA to allow multiple video sources, but I only run one source for evaluation.
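For reference, the detection gap often comes down to a few Gst-nvinfer config keys that default differently from `yolo predict` (which uses conf=0.25, iou=0.7). A hypothetical config_infer fragment, assuming nvinfer with NMS clustering; the key names are standard nvinfer properties, the values are illustrative:

```ini
[property]
# match Ultralytics preprocessing: aspect-preserving resize with padding
maintain-aspect-ratio=1
symmetric-padding=1
# YOLO models expect pixel values scaled to 0..1 (1/255)
net-scale-factor=0.0039215697906911373

[class-attrs-all]
# a higher pre-cluster-threshold than the Ultralytics conf silently
# drops detections before clustering
pre-cluster-threshold=0.25
nms-iou-threshold=0.7
```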
