For using TensorFlow's Dataset API together with TFRecord, see this very good article:
https://cloud.tencent.com/developer/article/1088751
github:
Key points on the Dataset API:
Usually you shuffle first, then batch, then repeat.
Of course, you can also repeat first and then batch. The difference is that with batch before repeat, the last batch of every epoch may be a partial (smaller) batch, while with repeat before batch only the very last batch can be partial; however, in the latter case a single batch may contain data from two adjacent epochs, so use it with caution.
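The ordering difference above can be illustrated with a small pure-Python sketch (no TensorFlow needed); `batch` and `repeat` here are hypothetical stand-ins for the corresponding Dataset transformations:

```python
# Pure-Python sketch of how the order of batch/repeat changes batching.
# Dataset of 5 elements, batch size 2, 2 epochs.
def batch(seq, size):
    # Split seq into consecutive chunks of at most `size` elements.
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def repeat(seq, times):
    # Concatenate `times` copies of seq (one copy per epoch).
    return seq * times

data = [0, 1, 2, 3, 4]

# batch then repeat: the partial batch [4] appears at the end of every epoch.
batch_then_repeat = repeat(batch(data, 2), 2)
# [[0, 1], [2, 3], [4], [0, 1], [2, 3], [4]]

# repeat then batch: only the stream's very last batch can be partial,
# but the batch [4, 0] mixes data from epoch 1 and epoch 2.
repeat_then_batch = batch(repeat(data, 2), 2)
# [[0, 1], [2, 3], [4, 0], [1, 2], [3, 4]]

print(batch_then_repeat)
print(repeat_then_batch)
```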
The Dataset's one_shot_iterator vs. make_initializable_iterator
See a question on Stack Overflow:
Actually, I think the asker's real question is not what the answer addresses. The point is that a one-shot iterator can only iterate for a single epoch, while an initializable iterator can iterate for multiple epochs (by calling sess.run(iterator.initializer) before each epoch). So the difference between the two iterators is quite clear; in other words, the problem with the second code snippet is that, apart from epoch 0, every remaining epoch raises an out-of-range (no data) error.
You can run the following code to verify this:
import tensorflow as tf
import numpy as np

dataset = tf.data.Dataset.from_tensor_slices(
    np.random.uniform(size=(5, 2))).shuffle(100).batch(2)
iterator = dataset.make_initializable_iterator()
# iterator = dataset.make_one_shot_iterator()  # fails after epoch 0
one_element = iterator.get_next()

with tf.Session() as sess:
    for i in range(5):
        sess.run(iterator.initializer)  # reset the iterator for each epoch
        while True:
            try:
                print(sess.run(one_element))
            except tf.errors.OutOfRangeError:
                print("Epoch %s is done." % i)
                break
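The same behavior can be shown with a pure-Python analogy (no TensorFlow required): a one-shot iterator is like a plain Python iterator that is consumed once, while an initializable iterator can be reset before each epoch. The class name `InitializableIterator` is a hypothetical illustration, not a TensorFlow API:

```python
# Pure-Python analogy: one-shot vs. re-initializable iteration.
data = [10, 20, 30]

# A plain iterator is "one-shot": after the first pass it is exhausted.
one_shot = iter(data)
epoch0 = list(one_shot)  # full data
epoch1 = list(one_shot)  # empty: the iterator cannot be rewound

class InitializableIterator:
    """Analogy for make_initializable_iterator: calling initializer()
    rewinds the iterator so every epoch sees the full data again."""
    def __init__(self, data):
        self.data = data
        self.initializer()

    def initializer(self):
        self._it = iter(self.data)

    def get_next(self):
        # Raises StopIteration at end of epoch, like OutOfRangeError.
        return next(self._it)

it = InitializableIterator(data)
epochs = []
for _ in range(2):
    it.initializer()  # analogous to sess.run(iterator.initializer)
    seen = []
    while True:
        try:
            seen.append(it.get_next())
        except StopIteration:
            break
    epochs.append(seen)

print(epoch0, epoch1, epochs)
```

Without the reset, the second epoch sees no data, which is exactly the out-of-range error the one-shot iterator produces in the TensorFlow snippet above.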