I have a YOLOv8 model that I converted to ONNX and run on a Jetson AGX Orin using DeepStream and your lib.
I get completely different results when I run the same source with the same model through Ultralytics versus the GStreamer/DeepStream pipeline.
Why could this be? Perhaps the ONNX conversion? If so, can I run it without conversion, using cfg and weights/wts files as described at https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/:
> NOTE: You can use your custom model, but it is important to keep the YOLO model reference (yolov8_) in your cfg and weights/wts filenames to generate the engine correctly.
>
> Step 5. Generate the cfg, wts and labels.txt (if available) files (example for YOLOv8s):
>
> ```
> python3 gen_wts_yoloV8.py -w yolov8s.pt
> ```
The number of objects found is MUCH lower than with the Ultralytics approach.
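For reference, one common cause of far fewer detections under DeepStream is that the nvinfer confidence/NMS settings do not match Ultralytics' defaults (conf=0.25, IoU=0.7). A hedged sketch of the relevant nvinfer config properties (values here are Ultralytics' defaults, not taken from my actual config):

```ini
[property]
# Match Ultralytics-style preprocessing: letterbox instead of stretching
maintain-aspect-ratio=1
symmetric-padding=1
# 1/255 — Ultralytics normalizes pixel values to [0, 1]
net-scale-factor=0.0039215697906911373

[class-attrs-all]
# Align with Ultralytics' default conf=0.25; a higher threshold
# here silently drops boxes that Ultralytics would keep
pre-cluster-threshold=0.25
nms-iou-threshold=0.7
```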
This is how I run Ultralytics:

```
yolo predict model=/mnt/M2Disk/Assets/YoloV8_Model/weights/best.pt source='/mnt/M2Disk/Assets/TestRun/jpg/Images' imgsz=640 save_txt=true save=false save_conf=true
```
For DeepStream I combine the deepstream-test1 and deepstream-test3 Python apps from NVIDIA to allow multiple video sources, but I only run one for evaluation.
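A second common source of divergence between the two pipelines is preprocessing: Ultralytics letterboxes the input (resize preserving aspect ratio, then pad with gray 114), while nvinfer by default stretches the frame to the network size unless `maintain-aspect-ratio=1` is set. A minimal NumPy sketch of the letterbox step (a hypothetical dependency-free helper for comparing against what DeepStream feeds the engine; Ultralytics itself uses OpenCV bilinear resize):

```python
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 640, pad_value: int = 114):
    """Resize keeping aspect ratio, pad the rest (Ultralytics-style).

    img: HxWx3 uint8 array. Returns (padded image, scale, (pad_x, pad_y)).
    Nearest-neighbour resize keeps this sketch dependency-free.
    """
    h, w = img.shape[:2]
    scale = min(new_size / h, new_size / w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbour index maps for the resize step
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Pad symmetrically to new_size x new_size with the fill value
    out = np.full((new_size, new_size, 3), pad_value, dtype=img.dtype)
    pad_y, pad_x = (new_size - nh) // 2, (new_size - nw) // 2
    out[pad_y:pad_y + nh, pad_x:pad_x + nw] = resized
    return out, scale, (pad_x, pad_y)
```

If the DeepStream input is stretched rather than letterboxed, small objects are distorted and detection counts drop, which would be consistent with the behaviour described above.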