
Fix LayerProfile class check with SavedModels #719

Draft · jneeven wants to merge 5 commits into base: main
Conversation

jneeven
Contributor

@jneeven jneeven commented Oct 29, 2021

(
    isinstance(layer, tensorflow.python.keras.saving.saved_model.load.QuantConv2D) != 
    isinstance(layer, larq.layers.QuantConv2D)
)

Unfortunately there is no way to check whether a RevivedLayer named QuantConv2D was originally a larq layer rather than some custom layer with the same name. The only situation in which this would break, however, is if you subclass a larq layer under an identical name and also change whether it has MACs, which seems extremely unlikely.
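To make the fallback concrete, here is a minimal, self-contained sketch of the check described above. The class names and the `LARQ_MAC_LAYER_NAMES` set are stand-ins, not larq's actual internals: a layer revived from a SavedModel is not an instance of the original larq class, so the check falls back to comparing class names.

```python
# Hypothetical sketch of the fallback check; names are assumptions.
LARQ_MAC_LAYER_NAMES = {"QuantConv2D", "QuantDense"}


class QuantConv2D:
    """Stand-in for larq.layers.QuantConv2D."""


class RevivedQuantConv2D:
    """Stand-in for a layer revived from a SavedModel."""


# Keras revives saved custom layers under their original class name.
RevivedQuantConv2D.__name__ = "QuantConv2D"


def is_mac_layer(layer):
    # isinstance fails for revived layers, so fall back to the class name.
    return isinstance(layer, QuantConv2D) or type(layer).__name__ in LARQ_MAC_LAYER_NAMES


print(is_mac_layer(QuantConv2D()))         # True via isinstance
print(is_mac_layer(RevivedQuantConv2D()))  # True via the name fallback
```

As noted above, this name-based fallback cannot distinguish a revived larq layer from a revived custom layer that happens to share the name.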

@jneeven jneeven added the bug Something isn't working label Oct 29, 2021
@jneeven jneeven requested a review from a team October 29, 2021 16:04
Member

@lgeiger lgeiger left a comment


Cool! Can you also add a unittest to make sure we don't run into this issue again?

@jneeven
Contributor Author

jneeven commented Nov 1, 2021

@lgeiger The test was a good call; it turns out there is a more serious issue at play here as well. I've pushed an unfinished test that prints the weight profiles of the layer before and after SavedModel loading, and they differ:

# op_profiles, [w.bitwidth for w in profile.weight_profiles], mac_containing_layer, input_precision
[OperationProfile(n=1179648, precision=1, op_type='mac')] [1, 32] True 1
[OperationProfile(n=1179648, precision=32, op_type='mac')] [32, 32] True 1

At this point this is a bit beyond the parts of Larq I'm familiar with, so I'll have to postpone fixing this for now. I suspect the weight precision somehow isn't part of the layer config or something like that...
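The degradation from `[1, 32]` to `[32, 32]` above is consistent with the weight's quantization metadata being dropped on reload. A minimal sketch, with stand-in classes rather than larq's real ones: a `QuantizedVariable` carries a `precision` attribute, while a plainly restored variable does not, so a profiler reading the attribute with a full-precision default would report 32 bits.

```python
# Hypothetical sketch; class names are stand-ins for the real types.
class QuantizedVariable:
    """Stand-in for larq.quantized_variable.QuantizedVariable."""
    precision = 1


class RestoredVariable:
    """Stand-in for the plain variable Keras restores; no `precision` attr."""


def weight_bitwidth(weight):
    # Fall back to full precision when the attribute is missing.
    return getattr(weight, "precision", 32)


print(weight_bitwidth(QuantizedVariable()))  # 1
print(weight_bitwidth(RestoredVariable()))   # 32
```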

@jneeven jneeven marked this pull request as draft November 1, 2021 12:48
@jneeven
Copy link
Contributor Author

jneeven commented Nov 1, 2021

To expand on the above, the problem is as follows:

from tempfile import TemporaryDirectory

import tensorflow as tf
from tensorflow.python.keras.utils.generic_utils import get_custom_objects

import larq as lq

model = tf.keras.models.Sequential(
    [
        lq.layers.QuantConv2D(
            filters=32,
            kernel_size=(3, 3),
            kernel_quantizer="ste_sign",
            input_quantizer="ste_sign",
            input_shape=(64, 64, 1),
            padding="same",
        )
    ]
)

# Save and reload
with TemporaryDirectory() as dir:
    model.save(dir)
    del get_custom_objects()["QuantConv2D"]
    loaded_model = tf.keras.models.load_model(dir, compile=False)

# Pre-save
print(type(model.layers[0].weights[0]))
print(model.layers[0].weights[0].precision)


# Loaded model
print(type(loaded_model.layers[0].weights[0]))
print(loaded_model.layers[0].weights[0].precision)
Output:

<class 'larq.quantized_variable.QuantizedVariable'>
1
<class 'tensorflow.python.ops.resource_variable_ops.UninitializedVariable'>
Traceback (most recent call last):
  File "/mnt/windows/share/Plumerai/larq/test.py", line 34, in <module>
    print(loaded_model.layers[0].weights[0].precision)
AttributeError: 'UninitializedVariable' object has no attribute 'precision'
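The `del get_custom_objects()["QuantConv2D"]` step in the script above is what forces this code path. A minimal sketch of the lookup behavior, with assumed names rather than Keras internals: the loader looks up the saved class name in the custom-objects registry; if it is present, the real class is rebuilt, otherwise a generic revived layer is returned that lacks larq-specific attributes such as `precision`.

```python
# Hypothetical sketch of custom-object dispatch; names are assumptions.
custom_objects = {}


class QuantConv2D:
    """Stand-in for larq.layers.QuantConv2D."""
    precision = 1


class RevivedLayer:
    """Generic stand-in for a revived layer; no larq attributes."""


def load_layer(saved_class_name):
    # Rebuild the registered class if available, else revive generically.
    cls = custom_objects.get(saved_class_name, RevivedLayer)
    return cls()


custom_objects["QuantConv2D"] = QuantConv2D
print(type(load_layer("QuantConv2D")).__name__)  # QuantConv2D

del custom_objects["QuantConv2D"]                # mirrors the repro script
print(type(load_layer("QuantConv2D")).__name__)  # RevivedLayer
```

This suggests the reload works as intended only while larq's layers remain registered as custom objects; the profile bug surfaces when they are not.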

3 participants