
Does Windows support GPU inference? #2833

Open
puxuntu opened this issue Apr 17, 2024 · 2 comments
Labels
question: Further information is requested
User: The user asks how to use MNN, or uses it incorrectly and hits a bug.

Comments

@puxuntu

puxuntu commented Apr 17, 2024

I didn't build from source; I'm using the prebuilt release 2.8.1 C++ Windows package directly.

The model loading and config code is as follows:

```cpp
std::shared_ptr<Interpreter> net;
net.reset(Interpreter::createFromFile(modelPath));
if (net == nullptr) {
    MNN_ERROR("Invalid Model\n");
    return 0;
}
ScheduleConfig config;
config.type = MNN_FORWARD_CUDA;
config.mode = MNN_GPU_TUNING_NORMAL | MNN_GPU_MEMORY_IMAGE;
auto session = net->createSession(config);
auto input = net->getSessionInput(session, nullptr);
```

Console output:

```
Can't Find type=2 backend, use 0 instead
```
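That log line means the fallback is silent at the API level ("use 0 instead" is `MNN_FORWARD_CPU`). A hedged sketch of confirming which backend the session actually got, assuming MNN's `Interpreter::getSessionInfo` and the `BACKENDS` info code declared in `Interpreter.hpp` (verify both against the headers of your MNN version):

```cpp
#include <MNN/Interpreter.hpp>

// Sketch: after createSession(), ask MNN which backend it really scheduled.
// A CUDA request that silently fell back to CPU can then be detected in code
// rather than by scanning the console output.
static bool sessionUsesBackend(MNN::Interpreter* net, MNN::Session* session,
                               MNNForwardType wanted) {
    int backends[2] = {MNN_FORWARD_CPU, MNN_FORWARD_CPU};
    net->getSessionInfo(session, MNN::Interpreter::BACKENDS, backends);
    return backends[0] == wanted;  // backends[0] is the primary backend type
}
```

Calling `sessionUsesBackend(net.get(), session, MNN_FORWARD_CUDA)` right after session creation would distinguish a real CUDA session from a CPU fallback.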

@jxt1234 added the "question: Further information is requested" label on Apr 17, 2024
@jxt1234
Collaborator

jxt1234 commented Apr 17, 2024

The default release packages support OpenCL / Vulkan; for CUDA you need to build MNN yourself.
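For reference, a hedged sketch of the self-compile step this reply describes, on Windows with CMake. The `MNN_CUDA` / `MNN_OPENCL` / `MNN_VULKAN` option names come from MNN's top-level CMakeLists; confirm them in your checkout before relying on them:

```shell
# Sketch only: configure MNN with the CUDA backend enabled on Windows.
# Run from an "x64 Native Tools" prompt with the CUDA Toolkit installed.
git clone https://github.com/alibaba/MNN.git
cd MNN && mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DMNN_CUDA=ON   # add -DMNN_OPENCL=ON / -DMNN_VULKAN=ON as needed
cmake --build . --config Release
```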

@jxt1234 added the "User: asks how to use MNN, or uses it incorrectly and hits a bug" label on Apr 17, 2024
@sg0771

sg0771 commented Apr 18, 2024

Win32 summary:
OpenGL: cannot be built directly; it needs glew + libGLES plus quite a few config and code changes, and even after barely getting it to compile it doesn't run correctly.
CUDA: building with the CUDA Toolkit 11.7 fails with compile errors.
OpenCL: builds and runs; benchmark times are roughly 1/6 to 1/2 of the CPU times, but Task Manager shows no GPU usage.
Vulkan: builds and runs and drives the GPU normally (on older machines without Vulkan support it falls back to the CPU automatically).
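The automatic CPU fallback noted for Vulkan can also be made explicit when creating the session. A minimal sketch, assuming the `backupType` field that MNN's `ScheduleConfig` declares in `Interpreter.hpp` (check the header of your release):

```cpp
#include <MNN/Interpreter.hpp>

// Sketch: request Vulkan (one of the backends the prebuilt Windows package
// ships with) and name the CPU explicitly as the fallback backend, so the
// fallback path is a deliberate choice rather than a silent one.
MNN::Session* createGpuOrCpuSession(MNN::Interpreter* net) {
    MNN::ScheduleConfig config;
    config.type       = MNN_FORWARD_VULKAN;  // or MNN_FORWARD_OPENCL
    config.backupType = MNN_FORWARD_CPU;     // used when the GPU backend is missing
    return net->createSession(config);
}
```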
