Stuck at random accuracy for Larq-converted model #723
-
@prapti1998 could you perhaps include a detailed model summary of the exact model you're using (the Larq version)? A fairly common mistake that leads to constant output with Larq models is feeding the output of a ReLU layer into a Larq binary layer. In that case, since the output of a ReLU is >= 0, the input is quantised to the constant 1.0 (see the sketch below).
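For illustration, here is a minimal sketch of the problematic pattern (input shape and layer sizes are hypothetical):

```python
import tensorflow as tf
import larq

inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(64, (3, 3), padding="same")(inputs)
x = tf.keras.layers.ReLU()(x)  # every activation is >= 0 after this
# "ste_sign" maps any value >= 0 to +1, so the binary layer below
# sees a constant all-ones input:
x = larq.layers.QuantConv2D(
    64, (3, 3),
    padding="same",
    use_bias=False,
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)(x)
model = tf.keras.Model(inputs, x)
# The usual fix is to place BatchNormalization (roughly zero-centred
# output) in front of the binary layer instead of a ReLU.
```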
-
Hi @AdamHillier. We have tried the techniques you mentioned above, but training still does not reach reasonable accuracy or loss values. What do you suggest at this point? How do you think we should approach debugging this issue? Thanks.
-
Hi,
I am training an Xception architecture with Larq quantised layers.
I am trying to use separable conv2d operations, but with the following configuration the loss does not decrease:
```python
kwargs_d = dict(input_quantizer="ste_sign", pointwise_quantizer="ste_sign")
larq.layers.QuantSeparableConv2D(256, (3, 3), padding='same', use_bias=False,
                                 **kwargs_d, pad_values=1.0,
                                 name='block3_sepconv1')
```
Also for the conv2d layers I am using the following config:
```python
kwargs = dict(input_quantizer="ste_sign", kernel_quantizer="ste_sign",
              kernel_constraint="weight_clip")
larq.layers.QuantConv2D(64, (3, 3), use_bias=False, **kwargs,
                        name='block1_conv2')
```
A snippet of the logs is attached (the loss oscillates and the accuracy is stuck at chance level):
logs_correct.txt.txt
Also, I debugged my Larq model further and found that, for the same input image, the Larq model's forward pass gives an all-zero (0.0) output, whereas the forward pass of the Keras model (without Larq) gives a non-zero output.
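For reference, I compared the forward passes roughly along these lines (a minimal sketch; `larq_model` and `keras_model` stand for my two models, and the input shape is hypothetical):

```python
import numpy as np
import tensorflow as tf

# Hypothetical input batch; the shape depends on the actual model.
x = np.random.rand(1, 299, 299, 3).astype("float32")

# Same input through both models: the Larq one returns all zeros.
print("larq  output:", larq_model(x).numpy())
print("keras output:", keras_model(x).numpy())

# Probe an intermediate activation to see where the zeros first appear.
probe = tf.keras.Model(
    inputs=larq_model.input,
    outputs=larq_model.get_layer("block3_sepconv1").output,
)
print("block3_sepconv1 activation:", probe(x).numpy())
```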
Can anyone please tell me if I am doing something wrong or have missed anything?
I am using TF 2.3 and the latest version of Larq.
Thanks