This repository has been archived by the owner on Aug 3, 2021. It is now read-only.

How can we run inference with a pb file #547

Open
pratapaprasanna opened this issue Aug 1, 2020 · 1 comment

@pratapaprasanna

Hi all,

I have a couple of questions.

1- I was able to freeze my model and now have a .pb file. How can I use this .pb file to run inference?

2- Do we get an increase in inference speed if we freeze the graph rather than using the existing checkpoints?

3- Are there any ways to run inference faster on CPU? The current speed seems very slow.

Thanks in advance
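
For question 1, a minimal sketch of the usual TF1-style workflow: parse the frozen `GraphDef` (the contents of a `.pb` file), import it into a fresh graph, look up the input/output tensors by name, and run a session. The tensor names `input:0` and `output:0` below are assumptions for illustration; you would substitute the actual node names from your frozen model (e.g. via `graph.get_operations()`). To keep the example self-contained, it builds and serializes a tiny graph in place of a real `.pb` file on disk.

```python
# Hedged sketch (not this repo's official inference path): running inference
# from serialized frozen-graph bytes using TF1-compat APIs.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Build a tiny stand-in graph (y = 2x) and serialize its GraphDef.
# With a real frozen model you would instead do:
#   with open("frozen_model.pb", "rb") as f: pb_bytes = f.read()
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None], name="input")
    y = tf.multiply(x, 2.0, name="output")
pb_bytes = g.as_graph_def().SerializeToString()

# Parse the GraphDef and import it into a new graph.
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(pb_bytes)

with tf.Graph().as_default() as infer_graph:
    tf.import_graph_def(graph_def, name="")  # name="" keeps tensor names unprefixed
    # "input:0" / "output:0" are placeholders for your model's real tensor names.
    inp = infer_graph.get_tensor_by_name("input:0")
    out = infer_graph.get_tensor_by_name("output:0")
    with tf.compat.v1.Session(graph=infer_graph) as sess:
        result = sess.run(out, feed_dict={inp: [1.0, 2.0]})

print(list(result))  # expected: [2.0, 4.0]
```

On question 2: freezing mainly removes checkpoint-loading and variable overhead and enables some constant folding, so it can help startup time, though per-step speedups are usually modest. For question 3, the standard knobs are the session's thread-pool settings (`intra_op_parallelism_threads` / `inter_op_parallelism_threads` in `tf.compat.v1.ConfigProto`); whether they help depends on your CPU and model.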

@VictorBeraldo

@pratapaprasanna How did you run inference on CPU? Are you using CPU for the speech-to-text task? I could really use some help... Thanks!
