First of all, many thanks for sharing this code, which lets us learn about and use this model!

I trained both a YOLOv5-Lite-e model and a YOLOv5n model on a specific dataset: YOLOv5-Lite-e (ONNX file size 2.77 MB) and YOLOv5n (ONNX file size 7.14 MB). Judging by file size, the YOLOv5-Lite model is smaller, so in principle its inference should be faster, but during C++ inference I found that YOLOv5-Lite actually runs a bit slower.

VS2017, YOLOv5-Lite-e (onnxruntime 1.10.0, because upgrading to 1.15.0 throws errors), CPU inference time: 28 ms.
VS2017, YOLOv5n (onnxruntime 1.15.0), CPU inference time: 19 ms.

Why is that? Have you benchmarked the timing yourself?
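Single-shot CPU timings like the 28 ms / 19 ms above can be noisy (cold caches, thread-pool spin-up, OS scheduling), so comparisons are more reliable as a median over many runs after a warm-up phase. Below is a minimal, standard-library-only sketch of such a harness; the inference callable passed to it is a hypothetical stand-in for the actual onnxruntime `Session::Run` call, not code from this repo.

```cpp
// Minimal latency harness (C++17, standard library only).
#include <algorithm>
#include <chrono>
#include <functional>
#include <vector>

// Returns the median wall-clock latency of `fn` in milliseconds,
// after `warmup` untimed calls and over `runs` timed calls.
inline double median_latency_ms(const std::function<void()>& fn,
                                int warmup, int runs) {
    using clock = std::chrono::steady_clock;

    // Warm-up: lets caches, allocators, and thread pools settle
    // before any measurement is recorded.
    for (int i = 0; i < warmup; ++i) fn();

    std::vector<double> ms;
    ms.reserve(static_cast<size_t>(runs));
    for (int i = 0; i < runs; ++i) {
        const auto t0 = clock::now();
        fn();
        const auto t1 = clock::now();
        ms.push_back(
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }

    // Median is more robust to scheduler spikes than the mean.
    std::sort(ms.begin(), ms.end());
    return ms[ms.size() / 2];
}
```

Usage would look something like `median_latency_ms([&]{ session.Run(/* ... */); }, 10, 50);`, where `session` is your `Ort::Session` (hypothetical call site shown only for illustration).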
Thank you very much for this benchmark, it's great. Regarding your question:

1. The e and s variants are models designed for resource-constrained edge devices, so I recommend benchmarking on an ARM CPU. Below are results from two such devices:

Redmi K30: [benchmark screenshot]
Xiaomi 10: [benchmark screenshot]

Also, 28 ms on x86 is a bit slow. To better exploit the e model's performance, I will rewrite an SDK using MNN and post it to the discussion group soon; you can benchmark it again then.
For the new ONNXRUNTIME inference SDK, please use: https://github.com/ppogg/YOLOv5-Lite/tree/master/cpp_demo/ort