
How can the model be converted to ONNX? #36

Open
wting861006 opened this issue Feb 10, 2022 · 2 comments

Comments

@wting861006

No description provided.

@breezedeus
Owner

Hmm, it should just be the standard conversion process.

@wting861006
Author

I followed the standard process but the export failed. The code is as follows:
img_size = self.resized_shape
img = torch.zeros(1, 3, *img_size).cuda()
input_name = ["images"]
output_name = ["output"]

torch.onnx.export(self.model, img, save_path, verbose=True, opset_version=12,
                  input_names=input_name, output_names=output_name)

The output is as follows:
C:\Anaconda3\python.exe K:/cnocr/cnstd-master/std_detect.py
torch.Size([1, 3, 768, 768])
[WARNING 2022-02-14 11:07:35,724 _showwarnmsg:109] C:\Anaconda3\lib\site-packages\torchvision\models\shufflenetv2.py:23: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
channels_per_group = num_channels // groups

[WARNING 2022-02-14 11:07:35,808 _showwarnmsg:109] C:\Anaconda3\lib\site-packages\torch\nn\functional.py:3679: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn(

[WARNING 2022-02-14 11:07:35,824 _showwarnmsg:109] K:\cnocr\cnstd-master\model\dbnet.py:243: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
prob_map.squeeze(1).detach().cpu().numpy().astype(np.float32)

[WARNING 2022-02-14 11:07:35,874 _showwarnmsg:109] K:\cnocr\cnstd-master\model\dbnet.py:250: TracerWarning: torch.from_numpy results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
out=torch.from_numpy(out).cuda()

graph():
%output : Float(1, strides=[1], requires_grad=0, device=cuda:0) = onnx::Constant[value={0}]()
return (%output)

The exported onnx file is only 1 KB, so something must be wrong.
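Judging from the TracerWarnings above, the likely cause is that dbnet.py leaves PyTorch inside forward(): prob_map...numpy() and torch.from_numpy(out) break the trace, so the tracer records the post-processed result as a constant and the exported graph contains nothing but that single Constant node. Below is a minimal sketch of a workaround, assuming the raw probability map can be obtained before the NumPy post-processing; the wrapper class and the extract_prob_map call are hypothetical, not the repository's actual API:

import torch

class DetectorOnnxWrapper(torch.nn.Module):
    # Export only the pure-torch part of the detector; the NumPy
    # post-processing that triggers the TracerWarnings stays outside the
    # ONNX graph and can be applied to the ONNX output afterwards.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Hypothetical call: return the raw probability map computed before
        # .detach().cpu().numpy(); the real method name depends on dbnet.py.
        return self.model.extract_prob_map(x)

img = torch.zeros(1, 3, *self.resized_shape).cuda()
torch.onnx.export(DetectorOnnxWrapper(self.model).cuda().eval(), img, save_path,
                  opset_version=12, input_names=["images"],
                  output_names=["output"])

If the network itself traces cleanly, the exported graph should then contain the full backbone instead of a single Constant node, and the file size should be far larger than 1 KB.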
