Time taken for multi_round_infer.sh to run? #15

Open
pidugusundeep opened this issue Apr 1, 2020 · 3 comments

@pidugusundeep

I am running the scripts on Gitpod and it's taking a long time. May I know how much time it usually takes?

@alexrus commented Apr 17, 2020

How big is your test dataset?

@melisa-writer commented Apr 27, 2020

Hi, I am facing the same problem. Here are my (unsuccessful) attempts:

  • Set the batch size to 1 and tried decoding on CPU. For sentences of ~10 tokens I am getting ~6 seconds of decoding time.

  • Exported the estimator using tf.estimator.export.ServingInputReceiver and estimator.export_saved_model (a sketch of this export is at the end of this comment). The decoding time stayed almost the same.

  • Tried decoding on TPU in Colab. I am getting the following error:

INFO:tensorflow:Restoring parameters from PIE_ckpt/pie_model.ckpt
INFO:tensorflow:Error recorded from prediction_loop: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Op type not registered 'MapAndBatchDatasetV2' in binary running on n-dfbb99ed-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
INFO:tensorflow:prediction_loop marked as finished
WARNING:tensorflow:Reraising captured error
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
    self._extend_graph()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'MapAndBatchDatasetV2' in binary running on n-dfbb99ed-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
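
A possible workaround for the missing op (an assumption on my side, untested with this repo) is to rebuild the input pipeline with core tf.data ops, so the exported graph never references the fused contrib op. parse_fn and batch_size below are placeholders:

    # Before: the fused contrib op, which emits MapAndBatchDatasetV2 into the graph
    # dataset = dataset.apply(tf.contrib.data.map_and_batch(parse_fn, batch_size))

    # After: the same behavior with core tf.data ops registered in every binary
    dataset = dataset.map(parse_fn, num_parallel_calls=4)
    dataset = dataset.batch(batch_size, drop_remainder=True)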

Would you mind dropping some hints on how to achieve the decoding speed reported in the paper?
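
For reference, this is roughly the export path from the second bullet above (the feature spec here is a simplified assumption; the real one has to match the model's input features):

    import tensorflow as tf

    MAX_SEQ_LEN = 128  # placeholder; must match the trained model

    def serving_input_fn():
        # Accept a batch of serialized tf.Example protos at serve time.
        serialized = tf.placeholder(dtype=tf.string, shape=[None], name="examples")
        features = tf.parse_example(serialized, {
            "input_ids": tf.FixedLenFeature([MAX_SEQ_LEN], tf.int64),
            "input_mask": tf.FixedLenFeature([MAX_SEQ_LEN], tf.int64),
        })
        return tf.estimator.export.ServingInputReceiver(
            features=features, receiver_tensors={"examples": serialized})

    estimator.export_saved_model("exported_model", serving_input_fn)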

@alexrus commented Apr 29, 2020

@melisa-qordoba please give more details on how you exported and then imported the estimator.

What was the batch size when you used the exported estimator? Still 1? If so, try a larger batch and see if there is any improvement.

As noted in the paper, this was built for accuracy rather than speed.
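
For a quick batched test with the exported model, something like this TF 1.x sketch should work (the export directory and the list of serialized tf.Example protos are placeholders):

    import tensorflow as tf

    # tf.contrib.predictor loads the SavedModel once and returns a callable.
    predict_fn = tf.contrib.predictor.from_saved_model("exported_model/<timestamp>")

    # Feed many serialized tf.Example protos in a single call instead of one at a
    # time; the "examples" key must match the receiver tensor name used at export.
    outputs = predict_fn({"examples": batch_of_serialized_examples})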
