MNN inference is slower than PyTorch inference #2835
Labels: question (Further information is requested)
Comments
The model has been sent to the 120543985 mailbox.
How did you enable OpenCL on iOS? Are you running in a simulator?
On iOS, the GPU backend is normally MNN_FORWARD_METAL.
I'm not sure OpenCL was actually enabled; I just set the type attribute to MNN_FORWARD_OPENCL. After setting type to MNN_FORWARD_METAL, MNN inference took even longer, about 1 s more than PyTorch.
It was run on a real iPhone 15 Plus, not a simulator.
How did you build MNN?
I downloaded the iOS framework from your 2.8.1 release directly and saw the same behavior.
What is your test method? Generally you should start timing from the second forward and run many iterations back to back. See the speed tests in project/ios/Playground and tools/cpp/ModuleBasic.cpp.
Platform (if cross-compiling, also state the target platform):
iOS
GitHub version:
release 2.8.1
Build method:
Ran the iOS demo via Xcode; MNN.framework is from the 2.8.1 release.
Test script output:
The relevant code is as follows; the input is filled from the output of another inference pass.
If the commented-out lines above are enabled (i.e. a RuntimeManager is set up) and type is set to MNN_FORWARD_OPENCL, inference is slightly faster than PyTorch, but still below expectations. Previously, running image inference with a different model, MNN took about 1/9 of PyTorch's time.
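For reference, backend selection with the MNN Interpreter API is usually done through `ScheduleConfig` at session creation. This is a hedged sketch, not the reporter's actual code: the model path is hypothetical, and the snippet is not standalone-runnable since it must be linked against MNN.framework.

```cpp
#include <MNN/Interpreter.hpp>
#include <memory>

// Load the model (path is illustrative only).
std::shared_ptr<MNN::Interpreter> net(
    MNN::Interpreter::createFromFile("model.mnn"));

MNN::ScheduleConfig config;
config.type = MNN_FORWARD_METAL;  // GPU backend on iOS, per the maintainer

auto session = net->createSession(config);

// First forward is warm-up (backend init); time later runs only.
net->runSession(session);
```

If MNN falls back to the CPU (e.g. an unsupported backend such as OpenCL on iOS), the session is still created, which can make it look like the GPU setting "worked" while timings reflect CPU execution.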