
trainable=False doesn't work for QuantizeWrapperV2 #1067

Open
jiannanWang opened this issue May 2, 2023 · 1 comment
Assignees
cdh4696
Labels
bug Something isn't working

Comments

@jiannanWang

Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of each technique. For example, the overview page of quantization-aware training is here. An issue for anything not supported should be a feature request.

Describe the bug
Setting layer.trainable=False on a QuantizeWrapperV2-wrapped Dense layer doesn't convert all of its trainable weights to non-trainable.

System information

TensorFlow version (installed from source or binary):
2.12.0

TensorFlow Model Optimization version (installed from source or binary):
0.7.4

Python version:
3.10.11

Describe the expected behavior
As with the unquantized model, setting layer.trainable=False for the quantized layer should make all of the layer's weights non-trainable.

Describe the current behavior
After setting layer.trainable=False for a quantized Dense layer, the Dense layer still contains trainable weights.

Code to reproduce the issue
The following Colab contains the code to reproduce this bug.
https://colab.research.google.com/drive/1KYnZkBI_g3Pu9Vqz4UneXCtNOs3_kB39?usp=sharing
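For readers without access to the Colab, here is a minimal sketch of the repro, assuming the notebook follows this shape (the layer sizes, layer names, and exact variable counts are illustrative, not taken from the notebook):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Plain model: a single Dense layer (kernel and bias are trainable).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
print("trainable variables in model: ", len(model.trainable_variables))

# Freezing the plain model behaves as expected: no trainable variables remain.
for layer in model.layers:
    layer.trainable = False
print("trainable variables in model after setting trainable to false: ",
      len(model.trainable_variables))

# Quantization-aware training wraps each layer in QuantizeWrapperV2.
quantized_model = tfmot.quantization.keras.quantize_model(
    tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))]))
print("trainable variables in quantized model: ",
      len(quantized_model.trainable_variables))

# Freezing the wrapped layers should also freeze the inner Dense kernel,
# but the report shows dense_11/kernel:0 still listed as trainable.
for layer in quantized_model.layers:
    layer.trainable = False
print("trainable variables in quantized model after setting trainable to false: ",
      len(quantized_model.trainable_variables))
for v in quantized_model.trainable_variables:
    print(v.name)
```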

Screenshots
If applicable, add screenshots to help explain your problem.

Additional context
The output from the Colab is below. Note that setting layer.trainable=False makes the original model's Dense layer weights non-trainable. However, dense_11/kernel:0 from the Dense layer in the quantized model is still trainable after setting layer.trainable=False.

trainable variables in model:  2
trainable variables in model after setting trainable to false:  0
trainable variables in quantized model:  1
trainable variables in quantized model after setting trainable to false:  1
dense_11/kernel:0
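For reference, one way to check whether the flag actually reaches the wrapped layer is sketched below; the layer index and the use of the wrapper's .layer attribute are assumptions about the quantized model's structure, so adjust them for the real model.

```python
# Assumes the QuantizeWrapperV2 around the Dense layer is the last layer of
# the quantized model built above; adjust the index if needed.
wrapped = quantized_model.layers[-1]
print(type(wrapped).__name__)    # expected: QuantizeWrapperV2
print(wrapped.trainable)         # False after the freeze loop above
print(wrapped.layer.trainable)   # trainable flag of the inner Dense layer
print([v.name for v in wrapped.trainable_weights])
```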
@jiannanWang jiannanWang added the bug Something isn't working label May 2, 2023
@cdh4696 cdh4696 self-assigned this May 3, 2023
@MATTYGILO

Any progress?
