
Why is YOLOv5-Lite inference slower than YOLOv5n inference? #249

Open

pcycccccc opened this issue Dec 8, 2023 · 2 comments

@pcycccccc

First of all, thank you for sharing this code; it lets us learn about and use this model!
I trained a YOLOv5-Lite-e model and a YOLOv5n model on the same dataset. The exported ONNX files are 2.77 MB for YOLOv5-Lite-e and 7.14 MB for YOLOv5n. Judging by file size, YOLOv5-Lite is the smaller model and should in theory infer faster, but in my C++ tests YOLOv5-Lite is actually a bit slower.
VS2017, YOLOv5-Lite-e (onnxruntime 1.10.0, since upgrading to 1.15.0 throws an error), CPU inference time: 28 ms
[screenshot: timing output]
VS2017, YOLOv5n (onnxruntime 1.15.0), CPU inference time: 19 ms
[screenshot: timing output]
Why is that? Have you benchmarked the inference time yourself?
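For reference, here is a minimal sketch of how such a CPU timing comparison can be set up with the onnxruntime C++ API. This is not the measurement code used above; the model path, the 1×3×320×320 input shape, the tensor names `images`/`output`, and the thread count are all assumptions that must match your actual export:

```cpp
#include <onnxruntime_cxx_api.h>
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "bench");
    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(4);  // assumption: 4 threads
    opts.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);

    // Hypothetical model path; on Windows (VS2017) the path is a wide string.
    Ort::Session session(env, L"v5lite-e.onnx", opts);

    // Assumed YOLOv5-style input: 1x3x320x320, filled with dummy data.
    std::vector<int64_t> shape{1, 3, 320, 320};
    std::vector<float> input(1 * 3 * 320 * 320, 0.5f);
    Ort::MemoryInfo mem =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value tensor = Ort::Value::CreateTensor<float>(
        mem, input.data(), input.size(), shape.data(), shape.size());

    const char* in_names[]  = {"images"};  // assumption: default export names
    const char* out_names[] = {"output"};

    // Warm up once so one-time graph/thread-pool setup is excluded.
    session.Run(Ort::RunOptions{nullptr}, in_names, &tensor, 1, out_names, 1);

    const int iters = 100;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i)
        session.Run(Ort::RunOptions{nullptr}, in_names, &tensor, 1, out_names, 1);
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count() / iters;
    std::cout << "mean latency: " << ms << " ms" << std::endl;
    return 0;
}
```

Averaging over many runs after a warm-up pass avoids counting one-time session setup, which can otherwise dominate a single-shot measurement. Note also that the two models above were timed under different onnxruntime versions (1.10.0 vs 1.15.0), which by itself can shift the numbers.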

@ppogg
Owner

ppogg commented Dec 12, 2023

Thank you very much for the benchmark, this is great. To answer your question:
1. The e and s variants are designed for resource-constrained edge devices, so I recommend testing on an ARM CPU. Below are results from two such devices:
Redmi K30:
[benchmark screenshot]
Xiaomi Mi 10:
[benchmark screenshot]
Also, 28 ms on x86 is a bit slow. To get the most out of the e model, I will rewrite an SDK based on MNN and post it to the discussion group soon; you can re-run the benchmark then.
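The MNN-based SDK mentioned above had not been published at the time of this comment. As a rough sketch only, under assumed names (a `v5lite-e.mnn` file produced by converting the ONNX export with MNN's converter), a CPU forward pass with MNN's C++ interpreter API looks roughly like this:

```cpp
#include <MNN/Interpreter.hpp>
#include <memory>

int main() {
    // Hypothetical model file: the ONNX export converted to MNN format.
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("v5lite-e.mnn"));

    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;  // CPU backend (ARM NEON on mobile)
    config.numThread = 4;           // assumption: 4 cores
    MNN::Session* session = net->createSession(config);

    // Fill the input tensor (image preprocessing omitted for brevity).
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    // ... copy the normalized image into `input` ...

    net->runSession(session);

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    // ... decode YOLO predictions from `output` ...
    return 0;
}
```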

@ppogg
Owner

ppogg commented Dec 27, 2023

The new ONNXRUNTIME inference SDK is ready; please use:
https://github.com/ppogg/YOLOv5-Lite/tree/master/cpp_demo/ort
