CUDNN_STATUS_INTERNAL_ERROR on GTX 1660 Ti #27144
Comments
I am having the same issue on an RTX 2080 with TF 2.0; see #27141.
Duplicate of #24496.
Looks like setting this works. Is this the way forward, and is there a way to set it by default? It seems this will require me to touch a large number of files in my unit tests. Thanks!
I am glad the hack worked for you. Unfortunately, we don't currently have a feature to set this by default.
In the Python keras package, modify the file tensorflow_backend.py (fix added by sloan) to work around CUDNN_STATUS_INTERNAL_ERROR.
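The comment above does not include the patch itself. As a hedged sketch: the workaround commonly cited for this error is to make TensorFlow allocate GPU memory incrementally instead of reserving it all up front (this is an assumption about what the "sloan fix" does; the exact edit to tensorflow_backend.py is not shown in the thread). Two common ways to enable it:

```python
import os

# 1) Environment variable, set before TensorFlow is imported
#    (recognized by TF 1.14+ and TF 2.x) -- this also answers the
#    "set it by default" question, since it needs no code changes:
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# 2) TF 2.x API equivalent (uncomment where TensorFlow is installed):
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)

print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
```

With memory growth enabled, cuDNN's initial workspace allocation no longer collides with TensorFlow's default grab of nearly all GPU memory, which is a frequent cause of CUDNN_STATUS_INTERNAL_ERROR on 16xx/20xx cards.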
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
`python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
Describe the current behavior
Using any convolution layer fails. The same error occurs with both versions of CUDA, with the ~/.nv directory cleared between runs, and in Docker images as well. cuDNN was verified to be working correctly with simple cuDNN programs (e.g. this one).
Describe the expected behavior
The code should print an array of 16 numbers.
Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
Other info / logs