
cudaSuccess (4 vs. 0) unspecified launch failure #10

Open
pauldelmusica opened this issue Jun 10, 2017 · 5 comments

Comments

@pauldelmusica

pauldelmusica commented Jun 10, 2017

No description provided.

@pauldelmusica
Author

pauldelmusica commented Jun 10, 2017

deep_image_analogy.exe ../models/ ../demo/content.png ../demo/style.png ../demo/output/ 0 0.2 2 0

Changing the ratio from 0.5 to 0.2 gets the program to run on a GTX 1070 with 8 GB.
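
For anyone else lowering the ratio to fit in memory: a quick way to check how much headroom a given setting leaves is to query the CUDA runtime directly. A minimal, standalone sketch (not part of this repo), assuming CUDA 7.5/8.0 and nvcc:

```cpp
// Standalone sketch: report free vs. total device memory via the CUDA runtime.
// Build with e.g. `nvcc -o meminfo meminfo.cu` and run it before/while the tool runs.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("GPU memory: %.2f GB free of %.2f GB total\n",
           free_bytes / 1073741824.0, total_bytes / 1073741824.0);
    return 0;
}
```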

@dany-on-demand

dany-on-demand commented Aug 9, 2017

Can you fire up a GPU analysis tool and check memory usage? Mine never goes above 1.8 GB and I still get this error.

Did you compile with or without cuDNN? Did you use CUDA 7.5 or 8.0?

Have you tried making only one-way style transfers, as suggested in another issue? If so, how did you make that happen?

@Callidior

I also can't run the application, presumably due to memory problems, on my GeForce GTX 1050, which has 4 GB of GPU memory. Lowering the ratio works, since it scales down the images before passing them through the network, but the results are pretty bad.

On a Tesla K40c with 12 GB, everything works fine. However, the memory consumption there seems to never exceed 1 GB, so I wonder why I can't run it on my GPU.

@dany-on-demand

dany-on-demand commented Aug 13, 2017

@Callidior Yep, I "solved" my problem the same way; a Tesla K80 can handle the maximum resolution at a 1.0 ratio.

I've heard about memory spikes; maybe it does run out of memory, but so quickly that the monitoring app doesn't pick it up?

Well, I'm actually glad; at least this confirms that it's not a temperature-throttling issue.

And yeah, the results are "pretty bad" all right.
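
On the spike theory: an external monitor samples too slowly to catch a short-lived allocation, but checking every CUDA call in the code itself will pinpoint the failing allocation or launch. A minimal sketch of that pattern (the CUDA_CHECK macro below is illustrative, not something from this repo):

```cpp
// Sketch: wrap CUDA calls so a failure (e.g. cudaErrorMemoryAllocation, or the
// "unspecified launch failure" behind cudaSuccess (4 vs. 0)) is reported at the
// exact call that caused it, rather than surfacing later.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define CUDA_CHECK(call)                                                        \
    do {                                                                        \
        cudaError_t err__ = (call);                                             \
        if (err__ != cudaSuccess) {                                             \
            fprintf(stderr, "CUDA error %d (%s) at %s:%d\n",                    \
                    (int)err__, cudaGetErrorString(err__), __FILE__, __LINE__); \
            exit(EXIT_FAILURE);                                                 \
        }                                                                       \
    } while (0)

int main() {
    float* buf = nullptr;
    CUDA_CHECK(cudaMalloc(&buf, 512ull * 1024 * 1024));  // fails fast if memory runs out
    // ... launch kernels here ...
    CUDA_CHECK(cudaDeviceSynchronize());                 // surfaces launch failures (error 4)
    CUDA_CHECK(cudaFree(buf));
    return 0;
}
```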

@rozentill
Member

See if disabling the time limit (TDR) would help: #25
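
For reference, on Windows the TDR watchdog is controlled by registry values under GraphicsDrivers; the sketch below shows the commonly documented keys (TdrDelay is the timeout in seconds, TdrLevel 0 disables detection entirely). This is a system-wide setting, changing it is at your own risk, and a reboot is required afterwards.

```
Windows Registry Editor Version 5.00

; Raise the GPU watchdog timeout to 60 s (default is 2 s).
; Setting "TdrLevel"=dword:00000000 would disable detection entirely.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000003c
"TdrLevel"=dword:00000003
```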
