
OOM #1

Open
PandaWhoCodes opened this issue Sep 30, 2017 · 6 comments

Comments

@PandaWhoCodes
Contributor

Resource exhausted: OOM when allocating tensor with shape[]

My GPU is an Nvidia 1050 Ti with 2 GB of VRAM.
What is your setup?

@PandaWhoCodes
Contributor Author

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[512]
[[Node: dense_1/bias/Assign = Assign[T=DT_FLOAT, _class=["loc:@dense_1/bias"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](dense_1/bias, dense_1/Const)]]

@sagar448
Owner

Sorry, I did not see this until today. It's probably because you're not getting the right screenshot area. Are you sure you have the right section of the screen?

@PandaWhoCodes
Contributor Author

Yes, I'm sure.

@devdouglasferreira

I'm having the same issue here. With a batch size of 32 I was getting cudnn_status_alloc_failed on model.predict().
Then I reduced the batch size to 8; the first error went away, but this new one showed up:

[screenshot of the OOM error]

It doesn't run even with a lower batch size of 4.

My setup is a GTX 1060 3 GB, an Intel Core i5-7400, and 12 GB of RAM.
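For anyone hitting this on a 2–3 GB card: both errors mean the GPU ran out of memory, and shrinking the batch alone may not help if TensorFlow pre-allocates the whole card at startup. A minimal sketch of the usual workaround, assuming TensorFlow 1.x with the Keras backend (the model, x_train, and y_train names are placeholders, not objects from this repo):

```python
import tensorflow as tf
from keras import backend as K

# Let TensorFlow allocate GPU memory on demand instead of reserving
# the whole card up front; this often avoids OOM on 2-3 GB GPUs.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))

# Smaller batches also reduce peak activation memory during
# training and inference (try 8, 4, or even 1):
# model.fit(x_train, y_train, batch_size=4)
# model.predict(x_train, batch_size=4)
```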

@sagar448
Owner

sagar448 commented Sep 2, 2018

Has the issue been resolved, or does it still persist?

@PandaWhoCodes
Contributor Author

Yes, it still persists.
