-
I believe by calling that function you get both the weights and the bias terms. I combined the tutorial network with your code and modified it slightly:

```python
import tensorflow as tf
import numpy as np
import larq

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    larq.layers.QuantDense(512, kernel_quantizer="ste_sign",
                           kernel_constraint="weight_clip"),
    larq.layers.QuantDense(10, input_quantizer="ste_sign",
                           kernel_quantizer="ste_sign",
                           kernel_constraint="weight_clip",
                           activation="softmax"),
])
model.build(input_shape=(32, 32))

with larq.context.quantized_scope(True):
    weights = model.layers[1].get_weights()
    print(weights[0])  # (32, 512) binarized weights
    print(weights[1])  # (512,) floating-point bias terms
```
-
Hi Team,
I am trying to access the binary bias values of a trained larq model. I am able to access the weights with the code from the tutorial. Can you please help me with the bias values?