Question about converting to ONNX #117

Open
BaoBaoJianqiang opened this issue Mar 18, 2023 · 5 comments

@BaoBaoJianqiang

I have read your converter code and successfully converted the CPU version; it runs at roughly 11 s per image.
To speed it up further, I tried exporting to ONNX, but I ran into problems. Could you advise on the correct way to convert? My code is below (placed in TestModel.py's __init__, right after model.load_state_dict(d)):

    import onnx
    import onnxruntime

    export_onnx_file = './net.onnx'
    torch.onnx.export(model,
                      torch.randn(1, 1, 224, 224, device='cuda'),
                      export_onnx_file,
                      verbose=False,
                      input_names=["inputs"] + ["params_%d" % i for i in range(120)],
                      output_names=["outputs"],
                      opset_version=10,
                      do_constant_folding=True,
                      dynamic_axes={"inputs": {0: "batch_size", 2: "h", 3: "w"},
                                    "outputs": {0: "batch_size"}})

    # Check the exported graph
    net = onnx.load('./net.onnx')
    onnx.checker.check_model(net)
    print(onnx.helper.printable_graph(net.graph))
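(As a side note on the speed comparison: a minimal sketch of how the exported net.onnx could be timed with onnxruntime on CPU; the input name "inputs" and the 1x1x224x224 shape are taken from the export call above and may need adjusting.)

    import time
    import numpy as np
    import onnxruntime as ort

    # Run one forward pass on CPU and time it, to compare against the ~11 s/image PyTorch baseline
    sess = ort.InferenceSession('./net.onnx', providers=['CPUExecutionProvider'])
    dummy = np.random.randn(1, 1, 224, 224).astype(np.float32)
    start = time.time()
    sess.run(None, {'inputs': dummy})
    print('onnxruntime CPU forward pass: %.3f s' % (time.time() - start))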
@Zerohertz

Zerohertz commented Mar 20, 2023

I think it's because the tensor shapes can't be traced through the upsampling step.

Try this!

fpem_v2.py

   def _upsample_add(self, x, y):
        # _, _, H, W = y.size()
        # return F.interpolate(x, size=(H, W), mode='bilinear') + y
        _, _, H, W = y.size()
        upsample = nn.Upsample(size=(H, W), mode='bilinear')#, align_corners=True)
        return upsample(x) + y

pan_pp.py

    def _upsample(self, x, size, scale=1):
        # _, _, H, W = size
        # return F.interpolate(x, size=(H // scale, W // scale), mode='bilinear')
        _, _, H, W = size
        upsample = nn.Upsample(size=(H // scale, W // scale), mode='bilinear')#, align_corners=True)
        return upsample(x)
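A self-contained way to sanity-check that the nn.Upsample pattern above exports on its own (ToyUpsampleAdd is a hypothetical stand-in mirroring _upsample_add, not a module from this repo):

    import torch
    import torch.nn as nn

    class ToyUpsampleAdd(nn.Module):
        def forward(self, x, y):
            # Same pattern as the patched _upsample_add: upsample x to y's spatial size, then add
            _, _, H, W = y.size()
            upsample = nn.Upsample(size=(H, W), mode='bilinear')
            return upsample(x) + y

    x = torch.randn(1, 8, 56, 56)
    y = torch.randn(1, 8, 112, 112)
    torch.onnx.export(ToyUpsampleAdd(), (x, y), 'toy_upsample.onnx', opset_version=11)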

export2onnx

    dynamic_axes = {
        'in': {
            0: 'batch',
            2: 'Width',
            3: 'Height'
        },
        'out': {
            0: 'batch',
            2: 'Height',
            3: 'Width'
        }
    }

    torch.onnx.export(
        model,
        inputData,
        "test.onnx",
        input_names=["in"],
        output_names=["out"],
        dynamic_axes=dynamic_axes,
    )
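A quick way to confirm the dynamic axes took effect is to run the exported test.onnx at two different input sizes with onnxruntime (a sketch only; the 3-channel input shape is an assumption and should match inputData):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('test.onnx', providers=['CPUExecutionProvider'])
    for h, w in [(224, 224), (320, 480)]:
        dummy = np.random.randn(1, 3, h, w).astype(np.float32)
        out = sess.run(['out'], {'in': dummy})[0]
        # The output height/width should follow the input size if the dynamic axes were exported correctly
        print(dummy.shape, '->', out.shape)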

@BaoBaoJianqiang

With these changes, do I need to retrain the model and then regenerate the ONNX file?

@BaoBaoJianqiang

Is the inputData here the same value I provided in my code earlier?

@BaoBaoJianqiang

Can it support both CPU and GPU at the same time?

@Zerohertz

Please check this code!
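(Not the code linked above, just a minimal sketch of how a single onnxruntime session can target either CPU or GPU by listing execution providers; CUDAExecutionProvider is only available with the onnxruntime-gpu package.)

    import onnxruntime as ort

    # Prefer the CUDA provider when it is available, otherwise fall back to CPU
    providers = ['CPUExecutionProvider']
    if 'CUDAExecutionProvider' in ort.get_available_providers():
        providers.insert(0, 'CUDAExecutionProvider')
    sess = ort.InferenceSession('test.onnx', providers=providers)
    print(sess.get_providers())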
