
Add batch norm to default_n_bit_quantize_registry and default_8_bit_quantize_registry #1099

Open
DerryFitz opened this issue Sep 28, 2023 · 2 comments
Labels: feature request

DerryFitz commented Sep 28, 2023

  • TensorFlow version (you are using): 2.13

Motivation
Many models use batch norm in configurations that are not covered by the existing cases in the registry.
Adding batch norm to the registry would allow users to apply QAT to such models.

At present I am editing both registries by adding the line

_QuantizeInfo(layers.BatchNormalization, ['gamma'], [], True),

which works for my case, but it would be nice to have a more general fix.
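The registry edit above can be sketched as follows. This is a minimal, runnable stand-in that does not require TensorFlow: the `_QuantizeInfo` namedtuple, the stand-in `BatchNormalization` class, and the lookup helper are illustrative assumptions modeled on the call in the issue (in tensorflow_model_optimization the real `_QuantizeInfo` lives in `default_8_bit_quantize_registry.py` and takes the layer type, weight attribute names, activation attribute names, and a quantize-output flag).

```python
from collections import namedtuple

# Stand-in for the registry's _QuantizeInfo; field names are assumptions
# inferred from the call _QuantizeInfo(layers.BatchNormalization, ['gamma'], [], True).
_QuantizeInfo = namedtuple(
    '_QuantizeInfo',
    ['layer_type', 'weight_attrs', 'activation_attrs', 'quantize_output'])

# Stand-in for keras.layers.BatchNormalization, so this sketch runs
# without TensorFlow installed.
class BatchNormalization:
    pass

# The registry is essentially a list of _QuantizeInfo entries keyed by
# layer type; the proposed fix appends one for batch norm, quantizing
# the 'gamma' weight and the layer output.
registry = [
    _QuantizeInfo(BatchNormalization, ['gamma'], [], True),
]

def get_quantize_info(layer_type):
    """Return the registry entry for a layer type, or None if unregistered."""
    for info in registry:
        if info.layer_type is layer_type:
            return info
    return None

info = get_quantize_info(BatchNormalization)
print(info.weight_attrs)      # ['gamma']
print(info.quantize_output)   # True
```

With the entry present, a lookup for `BatchNormalization` succeeds instead of falling through the registry's supported-layer cases, which is the behavior the feature request asks for in the real registries.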

DerryFitz added the feature request label Sep 28, 2023
doyeonkim0 (Member) commented
@Xhark Could you take a look at this issue? Thank you! :)

DerryFitz (Author) commented
Any update on this?

tucan9389 self-assigned this Apr 1, 2024