Compute summaries in same session run as training
This corrects a significant bug in which training batches alternated
between being used for training and being used to calculate metrics.
Manual inspection of batches (with training shuffling turned off) and
comparison to the dataset revealed that only every other training batch
was being fed in for training. Validation batches were not affected.
The cause is that each call to sess.run advances the training iterator,
so running it twice on each loop iteration skipped every other batch.
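As an illustration (not part of the commit), here is a minimal TensorFlow 1.x
sketch of the underlying behavior: any sess.run whose fetches depend on a
tf.data iterator's get_next() consumes one batch, so fetching the training op
and the summaries in separate calls consumes two batches per loop iteration,
while fetching them together consumes only one. The dataset and the ops named
doubled and squared below are hypothetical stand-ins for train_op and merged.

    import tensorflow as tf  # assumes TensorFlow 1.x

    dataset = tf.data.Dataset.range(6).batch(2)      # batches: [0 1], [2 3], [4 5]
    iterator = dataset.make_initializable_iterator()
    next_batch = iterator.get_next()

    # Hypothetical stand-ins for train_op and merged; both depend on the batch.
    doubled = next_batch * 2
    squared = next_batch ** 2

    with tf.Session() as sess:
        sess.run(iterator.initializer)
        # Buggy pattern: two runs per iteration, so a batch is skipped.
        print(sess.run(doubled))             # consumes batch [0 1]
        print(sess.run(squared))             # consumes batch [2 3], not [0 1]
        # Fixed pattern: one run fetching both ops uses a single batch.
        print(sess.run([doubled, squared]))  # both computed on batch [4 5]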
aribrill committed Nov 8, 2017
1 parent 8a32409 commit 280e191
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions scripts/train.py
@@ -197,15 +197,15 @@ def load_val_data(index):
 )
 
 #training loop in session
-#with sv.managed_session() as sess:
 with sv.managed_session() as sess:
     for i in range(epochs):
         sess.run(training_init_op)
         print("Epoch {} started...".format(i+1))
         while True:
             try:
-                sess.run([train_op,increment_global_step_op],feed_dict={training: True})
-                summ = sess.run(merged, feed_dict={training: True})
+                __, __, summ = sess.run([train_op,
+                    increment_global_step_op, merged],
+                    feed_dict={training: True})
                 sv.summary_computed(sess, summ)
             except tf.errors.OutOfRangeError:
                 break
