
AttributeError: 'CustomLayerMaxPooling1D' object has no attribute 'kernel' #1085

Open
konstantinatopali opened this issue Aug 16, 2023 · 1 comment

@konstantinatopali

Hi, when I try Quantization Aware Training on my model, I get the following error from my 'CustomLayerMaxPooling1D' layer:

AttributeError Traceback (most recent call last)
Cell In [62], line 9
1 # quantize_apply requires mentioning MyLSTMQuantizeConfig with quantize_scope
2 # as well as the custom Keras layer.
3 with tfmot.quantization.keras.quantize_scope({'MyLSTMQuantizeConfig': MyLSTMQuantizeConfig, 'CustomLayerLSTM': CustomLayerLSTM,
4 'MyConv1DQuantizeConfig': MyConv1DQuantizeConfig,
5 'CustomLayerConv1D': CustomLayerConv1D,
6 'MyMaxPooling1DQuantizeConfig': MyMaxPooling1DQuantizeConfig,
7 'CustomLayerMaxPooling1D': CustomLayerMaxPooling1D}):
8 # Use quantize_apply to actually make the model quantization aware.
----> 9 quant_aware_model = tfmot.quantization.keras.quantize_apply(new_model)
11 quant_aware_model.summary()

File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow_model_optimization\python\core\keras\metrics.py:74, in MonitorBoolGauge.__call__.<locals>.inner(*args, **kwargs)
72 except Exception as error:
73 self.bool_gauge.get_cell(MonitorBoolGauge._FAILURE_LABEL).set(True)
---> 74 raise error

File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow_model_optimization\python\core\keras\metrics.py:69, in MonitorBoolGauge.__call__.<locals>.inner(*args, **kwargs)
66 @functools.wraps(func)
67 def inner(*args, **kwargs):
68 try:
---> 69 results = func(*args, **kwargs)
70 self.bool_gauge.get_cell(MonitorBoolGauge._SUCCESS_LABEL).set(True)
71 return results

File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow_model_optimization\python\core\quantization\keras\quantize.py:496, in quantize_apply(model, scheme, quantized_layer_name_prefix)
490 quantize_registry = scheme.get_quantize_registry()
492 # 4. Actually quantize all the relevant layers in the model. This is done by
493 # wrapping the layers with QuantizeWrapper, and passing the associated
494 # QuantizeConfig.
--> 496 return keras.models.clone_model(
497 transformed_model, input_tensors=None, clone_function=_quantize)

File ~\anaconda3\lib\site-packages\keras\models\cloning.py:502, in clone_model(model, input_tensors, clone_function)
499 clone_function = _clone_layer
501 if isinstance(model, Sequential):
--> 502 return _clone_sequential_model(
503 model, input_tensors=input_tensors, layer_fn=clone_function
504 )
505 else:
506 return _clone_functional_model(
507 model, input_tensors=input_tensors, layer_fn=clone_function
508 )

File ~\anaconda3\lib\site-packages\keras\models\cloning.py:372, in _clone_sequential_model(model, input_tensors, layer_fn)
367 layers, ancillary_layers = _remove_ancillary_layers(
368 model, layer_map, layers
369 )
371 if input_tensors is None:
--> 372 cloned_model = Sequential(layers=layers, name=model.name)
373 elif len(generic_utils.to_list(input_tensors)) != 1:
374 raise ValueError(
375 "To clone a Sequential model, we expect at most one tensor as "
376 f"part of input_tensors. Received: input_tensors={input_tensors}"
377 )

File ~\anaconda3\lib\site-packages\tensorflow\python\trackable\base.py:205, in no_automatic_dependency_tracking.<locals>._method_wrapper(self, *args, **kwargs)
203 self._self_setattr_tracking = False # pylint: disable=protected-access
204 try:
--> 205 result = method(self, *args, **kwargs)
206 finally:
207 self._self_setattr_tracking = previous_value # pylint: disable=protected-access

File ~\anaconda3\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb

File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow_model_optimization\python\core\quantization\keras\quantize_wrapper.py:251, in QuantizeWrapperV2.build(self, input_shape)
249 def build(self, input_shape):
250 self._trainable_weights.extend(self.layer.trainable_weights)
--> 251 super(QuantizeWrapperV2, self).build(input_shape)

File ~\AppData\Roaming\Python\Python39\site-packages\tensorflow_model_optimization\python\core\quantization\keras\quantize_wrapper.py:110, in QuantizeWrapper.build(self, input_shape)
102 self.optimizer_step = self.add_weight(
103 'optimizer_step',
104 initializer=tf.keras.initializers.Constant(-1),
105 dtype=tf.dtypes.int32,
106 trainable=False)
108 self._weight_vars = []
109 for weight, quantizer in (
--> 110 self.quantize_config.get_weights_and_quantizers(self.layer)):
111 quantizer_vars = quantizer.build(weight.shape,
112 self._weight_name(weight.name), self)
114 self._weight_vars.append((weight, quantizer, quantizer_vars))

Cell In [48], line 6, in MyMaxPooling1DQuantizeConfig.get_weights_and_quantizers(self, layer)
5 def get_weights_and_quantizers(self, layer):
----> 6 return [(layer.kernel, LastValueQuantizer())]

AttributeError: 'CustomLayerMaxPooling1D' object has no attribute 'kernel'

The class MyMaxPooling1DQuantizeConfig is the following:

LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer

class MyMaxPooling1DQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    def get_weights_and_quantizers(self, layer):
        return [(layer.kernel, LastValueQuantizer())]
    
    def get_activations_and_quantizers(self, layer):
        return [(layer.activation, MovingAverageQuantizer())]

    def set_quantize_activations(self, layer, quantize_activations):
        layer.activation = quantize_activations[0]
    
    def set_quantize_weights(self, layer, quantize_weights):
        layer.kernel = quantize_weights[0]

    def get_config(self):
        return {}

    def get_output_quantizers(self, layer):
        return []

The model along with the CustomLayerMaxPooling1D class is the following:


class CustomLayerMaxPooling1D(tf.keras.layers.MaxPooling1D):
    pass

new_model = tfmot.quantization.keras.quantize_annotate_model(tf.keras.Sequential([
    tfmot.quantization.keras.quantize_annotate_layer(CustomLayerConv1D(64, kernel_size=3, activation='relu', input_shape=(120, 51)), MyConv1DQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(CustomLayerMaxPooling1D(pool_size=2), MyMaxPooling1DQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(CustomLayerConv1D(64, kernel_size=3, activation = 'relu'), MyConv1DQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(CustomLayerMaxPooling1D(pool_size=2), MyMaxPooling1DQuantizeConfig()),
    tfmot.quantization.keras.quantize_annotate_layer(CustomLayerLSTM(32, return_sequences=False, activation='relu'), MyLSTMQuantizeConfig()),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(actions.shape[0], activation='softmax')    
]))

And this is how I quantize the model:

with tfmot.quantization.keras.quantize_scope({'MyLSTMQuantizeConfig': MyLSTMQuantizeConfig,
                                              'CustomLayerLSTM': CustomLayerLSTM,
                                              'MyConv1DQuantizeConfig': MyConv1DQuantizeConfig,
                                              'CustomLayerConv1D': CustomLayerConv1D,
                                              'MyMaxPooling1DQuantizeConfig': MyMaxPooling1DQuantizeConfig,
                                              'CustomLayerMaxPooling1D': CustomLayerMaxPooling1D}):

    # Use `quantize_apply` to actually make the model quantization aware.
    quant_aware_model = tfmot.quantization.keras.quantize_apply(new_model)

Apparently something is wrong with the get_weights_and_quantizers function in the MyMaxPooling1DQuantizeConfig class, but I am not sure how to fix the problem. I tried returning [] from get_weights_and_quantizers(self, layer) instead of [(layer.kernel, LastValueQuantizer())], and replacing layer.kernel = quantize_weights[0] with pass in set_quantize_weights(self, layer, quantize_weights), but it returns ValueError: not enough values to unpack (expected 2, got 0).
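A note on that ValueError, grounded in the traceback above: QuantizeWrapper.build (quantize_wrapper.py, line 110) unpacks every entry returned by get_weights_and_quantizers as a (weight, quantizer) pair, so "expected 2, got 0" suggests the method returned a list containing an empty entry (e.g. [[]]) rather than a plain empty list []. A minimal runnable sketch of the difference; build_weight_quantizers is a hypothetical stand-in for the library loop, not the actual tfmot code:

# Hypothetical stand-in for the (weight, quantizer) loop in QuantizeWrapper.build
# (see quantize_wrapper.py:110 in the traceback above).
def build_weight_quantizers(weights_and_quantizers):
    for weight, quantizer in weights_and_quantizers:
        print(f"building quantizer {quantizer} for weight {weight}")

build_weight_quantizers([])    # Fine: the loop body never runs, nothing is quantized.
build_weight_quantizers([[]])  # ValueError: not enough values to unpack (expected 2, got 0)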

new_model.summary() info:
[screenshot of new_model.summary() output]


Xhark commented Aug 28, 2023

Can you add how CustomLayerMaxPooling1D is implemented?
If it doesn't have a kernel, then you have to change MyMaxPooling1DQuantizeConfig as follows:

class MyMaxPooling1DQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    def get_weights_and_quantizers(self, layer):
        return [] # No kernel.
        # return [(layer.kernel, LastValueQuantizer())]
    
    def get_activations_and_quantizers(self, layer):
        return [(layer.activation, MovingAverageQuantizer())]

    def set_quantize_activations(self, layer, quantize_activations):
        layer.activation = quantize_activations[0]
    
    def set_quantize_weights(self, layer, quantize_weights):
        pass # No kernel to set; keep this consistent with the empty list above.
        # layer.kernel = quantize_weights[0]

    def get_config(self):
        return {}

    def get_output_quantizers(self, layer):
        return []
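Note that a plain MaxPooling1D subclass also has no activation attribute, so get_activations_and_quantizers and set_quantize_activations as written would raise a similar AttributeError. Below is a sketch of a fully weight-free config that quantizes the layer's output instead (a suggestion under that assumption, not the maintainer's exact fix); the output-quantizer settings shown are illustrative defaults:

import tensorflow_model_optimization as tfmot

MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer

class MyMaxPooling1DQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # QuantizeConfig for a layer with no kernel and no activation attribute.

    def get_weights_and_quantizers(self, layer):
        return []  # MaxPooling1D has no trainable weights.

    def get_activations_and_quantizers(self, layer):
        return []  # MaxPooling1D has no activation attribute either.

    def set_quantize_weights(self, layer, quantize_weights):
        pass  # Nothing to set; must stay consistent with the empty list above.

    def set_quantize_activations(self, layer, quantize_activations):
        pass  # Nothing to set.

    def get_output_quantizers(self, layer):
        # Quantize the pooling layer's output tensor instead.
        return [MovingAverageQuantizer(
            num_bits=8, per_axis=False, symmetric=False, narrow_range=False)]

    def get_config(self):
        return {}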
