
How to convert the pretrained model to Onnx or TensorRT #35

Open

GuardSkill opened this issue Jul 14, 2021 · 3 comments

Comments

@GuardSkill

I found that deploying StyleGAN is very difficult because of its custom ops. Could you provide some help?

@zsyzzsoft
Owner

It is possible to convert the custom ops to regular ops; StyleGAN's authors have implemented this. You can pass impl='ref' to each call of, e.g., this function.
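For context, the custom CUDA kernels fuse simple element-wise math that can also be expressed with regular ops. Below is a minimal NumPy sketch of roughly what the reference ('ref') path of the fused bias + activation op computes; the function name and parameters are illustrative, not the repository's exact signature:

```python
import numpy as np

def fused_bias_act_ref(x, b=None, act='lrelu', alpha=0.2, gain=np.sqrt(2)):
    """Reference (regular-op) fused bias + activation, NCHW layout.

    Roughly mirrors what the custom CUDA kernel computes: add a
    per-channel bias, apply the activation, then scale by a gain.
    """
    if b is not None:
        # Broadcast the bias over the channel axis (axis 1 in NCHW).
        x = x + b.reshape([1, -1] + [1] * (x.ndim - 2))
    if act == 'lrelu':
        x = np.where(x >= 0.0, x, x * alpha)
    # act == 'linear' leaves x unchanged.
    return x * gain
```

Because every operation here is a standard tensor op, graph exporters that choke on the custom CUDA kernel can trace this path without trouble.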

@GuardSkill
Author

> It is possible to convert the custom ops to regular ops. StyleGAN's authors have implemented this. You can pass impl='ref' to each call of e.g. this function.

Thank you very much for your help and reply! I tried it yesterday by passing impl='ref' in all the invoked functions, but I think it didn't work because I load your model via pickle, and I was confused by the code of the dnnlib.tflib.Network class. So instead I replaced the CUDA Python API functions of the ops with the reference functions directly, and that worked: I successfully converted the model to an Onnx model today! By the way, inference on the Onnx model takes about 0.5–1 s under Onnx Runtime, and GPU memory usage is under 3 GB. Awesome! Thanks for your reply and concern!!! XD
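For readers hitting the same pickle issue: one way to force the reference path even for a network rebuilt from a pickle is to rebind the op at module level before loading, so every caller (including unpickled code) picks up the regular-op version. This is only a sketch of the idea, not the exact code used in this thread; the stand-in module below is hypothetical, and in the real repository you would patch the corresponding function under dnnlib.tflib.ops:

```python
import functools
import types

# Hypothetical stand-in for the module holding a CUDA-backed custom op
# (in the real repo this would be something like
# dnnlib.tflib.ops.fused_bias_act).
def _fused_bias_act(x, impl='cuda'):
    if impl == 'cuda':
        raise RuntimeError('custom CUDA op: not exportable to Onnx')
    return [v * 2 for v in x]  # stand-in for the reference math

ops = types.SimpleNamespace(fused_bias_act=_fused_bias_act)

# Patch the module-level name BEFORE unpickling the network, so every
# call inside the rebuilt graph takes the regular-op ('ref') code path.
ops.fused_bias_act = functools.partial(_fused_bias_act, impl='ref')
```

After the patch, loading the pickled network and exporting the graph should no longer touch the custom CUDA kernels.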

@duygiangdg

Hi @GuardSkill. Could you share the code to convert the pretrained model to Onnx?
