If I don't use batching, I get an OOM error, and I want to know how to handle the OOM error for a CNN. If you can show me some code for a solution, I would be extremely grateful!
Our current implementation only supports batching when the batch size divides the number of training (and test/val) points.
I think the easiest workaround would be to:

1. Use the `nt.predict.gp_inference` API, which performs the equivalent computation but accepts `k_train_train` (and `k_train_test` / `k_train_val`) covariance matrices as inputs.
2. Compute these input `k_train_train` (`k_train_test`, `k_train_val`) matrices by calling `nt.batch(kernel_fn, batch_size=10)` on the pairs (train, train), (train, test), and (train, val), where each of the train/test/val sets is first padded with dummy rows so that its size is divisible by 10, and the resulting matrices `k_train_train` (`k_train_test`, `k_train_val`) are then truncated to remove the dummy covariance entries.
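The pad-then-truncate step above can be sketched as follows. This is a minimal illustration of the padding logic only: `kernel_fn` here is a hypothetical placeholder (a plain linear kernel) standing in for a real `neural-tangents` kernel function that you would wrap with `nt.batch(kernel_fn, batch_size=10)`, and the truncated matrices would then be passed to `nt.predict.gp_inference`.

```python
import numpy as np

def pad_to_multiple(x, batch_size):
    """Pad x with dummy zero rows so its leading dimension is divisible
    by batch_size. Returns the padded array and the original row count,
    which is later used to truncate the dummy covariance entries."""
    n = x.shape[0]
    remainder = n % batch_size
    if remainder == 0:
        return x, n
    padding = np.zeros((batch_size - remainder,) + x.shape[1:], dtype=x.dtype)
    return np.concatenate([x, padding], axis=0), n

# Placeholder kernel; in practice this would be a batched nt kernel_fn.
def kernel_fn(x1, x2):
    return x1 @ x2.T

batch_size = 10
x_train = np.random.RandomState(0).randn(23, 4)  # 23 is not divisible by 10
x_test = np.random.RandomState(1).randn(7, 4)

x_train_p, n_train = pad_to_multiple(x_train, batch_size)  # now 30 rows
x_test_p, n_test = pad_to_multiple(x_test, batch_size)     # now 10 rows

# Compute covariances on the padded inputs, then drop the dummy entries.
k_train_train = kernel_fn(x_train_p, x_train_p)[:n_train, :n_train]
k_test_train = kernel_fn(x_test_p, x_train_p)[:n_test, :n_train]
```

Because the dummy rows are only truncated out of the final matrices, the batched computation sees shapes divisible by `batch_size` while the downstream inference sees exactly the original covariances.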
Here is my code; I use a simple CNN for classification. The size of the data is
But I get an OOM error, and I want to know how to use batching in the function `gradient_descent_mse_ensemble` for `predict_fn`.
Here is the error: