To my knowledge, the test process may be simplified as follows:
restore the model and fetch the tensors we need, including the input utterances and the output embedding matrix, something like:
# code below may not work, just for illustrating the idea
saver.restore(sess, model_path)
# fetch the input and output tensors from the restored graph
x = tf.get_default_graph().get_tensor_by_name("x:0")
embedding_matrix = tf.get_default_graph().get_tensor_by_name("embedding_matrix:0")
# enrollment embeddings from the first batch of utterances
enroll = sess.run(embedding_matrix, feed_dict={x: next_batch()})
# verification embeddings from the remaining utterances
verif = sess.run(embedding_matrix, feed_dict={x: next_batch(start=M)})
enroll_center = cal_center(enroll)
S = calculate_similarity_matrix(verif, enroll)
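`cal_center` and `calculate_similarity_matrix` aren't defined in the snippet; under the usual GE2E convention (speaker centroids from enrollment utterances, cosine similarity of each verification embedding to each centroid), a minimal NumPy sketch might look like this — the names and shapes are assumptions, not the repo's actual code:

```python
import numpy as np

def cal_center(enroll):
    """enroll: (N, M, D) — N speakers, M enrollment utterances, D-dim embeddings."""
    return enroll.mean(axis=1)                       # (N, D) speaker centroids

def calculate_similarity_matrix(verif, enroll):
    """Cosine similarity of every verification embedding to every centroid."""
    centers = cal_center(enroll)                     # (N, D)
    v = verif / np.linalg.norm(verif, axis=-1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=-1, keepdims=True)
    return np.einsum('nmd,kd->nmk', v, c)            # (N, M, N)

# toy shapes: 4 speakers, 5 utterances each, 16-dim embeddings
rng = np.random.default_rng(0)
enroll = rng.standard_normal((4, 5, 16))
verif = rng.standard_normal((4, 5, 16))
S = calculate_similarity_matrix(verif, enroll)
print(S.shape)  # (4, 5, 4)
```

The diagonal slices `S[i, :, i]` are then same-speaker scores and the off-diagonal entries impostor scores, which is what an EER computation would consume.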
If the above process is right, why define a new graph in test()? Is there any difference besides the larger batch size?
Thanks.
I guess the reason for defining a new graph is that the input dimension of the "batch" variable has changed. Your code would work, but it can be slower. In fact, that's why 'batch' exists in TensorFlow in the first place: to train and test neural networks faster.
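The speed point can be seen without TensorFlow at all — a toy NumPy comparison (purely illustrative, not the repo's code) of running a dense layer one utterance at a time versus in one batched matmul:

```python
import time
import numpy as np

# Toy "network": a single dense layer applied to 640 utterance features.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
utts = rng.standard_normal((640, 256))

# One utterance at a time: many small matmuls.
t0 = time.perf_counter()
one_by_one = np.stack([u @ W for u in utts])
t_loop = time.perf_counter() - t0

# One batched pass: a single large matmul over the whole batch.
t0 = time.perf_counter()
batched = utts @ W
t_batch = time.perf_counter() - t0

print(np.allclose(one_by_one, batched))  # same result either way
print(f"loop: {t_loop:.4f}s  batched: {t_batch:.4f}s")
```

The batched pass typically wins by a wide margin, which is why the graph's batch dimension matters even though the math is identical.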