I am using a Titan X (12 GB) to train the model on smallNORB, and I get a memory error when reading in the data at: X = tf.convert_to_tensor(trainX, dtype=tf.float32) / 255.
Any ideas why this happens? I would think 12 GB should be enough to load all of the smallNORB data.
Thank you very much!
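One possible cause: tf.convert_to_tensor materializes the entire array as a single float32 tensor (4x the size of the uint8 source) on the GPU at once. A common workaround is to normalize and feed the data one batch at a time. Below is a minimal sketch using NumPy to illustrate the batching idea; the array shape for trainX is a hypothetical stand-in, and in TensorFlow the equivalent would be an input pipeline such as tf.data.Dataset.from_tensor_slices with a map step doing the cast and division.

```python
import numpy as np

def batches(x, batch_size=128):
    """Yield normalized float32 batches instead of converting the
    whole array to one tensor at once; peak memory is one batch."""
    for start in range(0, len(x), batch_size):
        # cast and scale one slice at a time
        yield x[start:start + batch_size].astype(np.float32) / 255.0

# hypothetical stand-in for the smallNORB training array (uint8 images)
trainX = np.zeros((1000, 96, 96, 2), dtype=np.uint8)

for b in batches(trainX):
    pass  # feed b to the model here instead of one giant tensor
```

This keeps the uint8 data in host memory and only promotes each batch to float32 as it is needed, so GPU memory usage stays proportional to the batch size rather than the dataset size.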