
why define a new graph when running test ? #15

Open
00001101-xt opened this issue Mar 5, 2019 · 1 comment

Comments


00001101-xt commented Mar 5, 2019

To my knowledge, the test process can be simplified as follows: restore the trained model and fetch the tensors we need, including the input utterances and the output embedding matrix, something like:

# code below may not work; it is just to illustrate the idea
import tensorflow as tf

saver.restore(sess, model_path)
x = tf.get_default_graph().get_tensor_by_name("x:0")
embedding_matrix = tf.get_default_graph().get_tensor_by_name("embedding_matrix:0")

# run enrollment and verification utterances through the restored graph
enroll = sess.run(embedding_matrix, feed_dict={x: next_batch()})
verif = sess.run(embedding_matrix, feed_dict={x: next_batch(start=M)})

enroll_center = cal_center(enroll)

S = calculate_similarity_matrix(verif, enroll)
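The helpers `cal_center` and `calculate_similarity_matrix` are not shown in the snippet, so here is a hypothetical NumPy sketch of what they might look like: per-speaker centroids from the enrollment embeddings, and a cosine-similarity matrix between verification embeddings and those centroids.

```python
import numpy as np

def cal_center(embeddings):
    """Average the enrollment embeddings per speaker.

    embeddings: array of shape (num_speakers, M, embed_dim), where M is
    the number of enrollment utterances per speaker.
    Returns L2-normalized centroids of shape (num_speakers, embed_dim).
    """
    centers = embeddings.mean(axis=1)
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)

def calculate_similarity_matrix(verif, centers):
    """Cosine similarity between verification embeddings and centroids.

    verif: (num_utterances, embed_dim); centers: (num_speakers, embed_dim).
    Returns a matrix of shape (num_utterances, num_speakers).
    """
    verif = verif / np.linalg.norm(verif, axis=1, keepdims=True)
    return verif @ centers.T
```

These are only assumed shapes and definitions; the repo's actual test() may normalize or score differently.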

If the above process is right, why define a new graph in test()? Are there any differences other than a larger batch size?

Thanks.

@BingqingWei

I guess the reason for defining a new graph is that the input dimension of the "batch" variable has changed. Your code would work but could be slower. In fact, that's why batching exists in TensorFlow in the first place: to train and test neural networks faster.
