Hi, when I train the following architecture:

lq.models.summary() suggests the model should be around 150 KB, yet when I convert it to a TFLite file using lce.convert_keras_model(), the file ends up being 5 MB, which is the size of the floating-point equivalent of the network. In addition, when I benchmark this model on a Pi 4, its inference time is equivalent to that of a full-precision network. Any idea why this is happening?
Replies: 2 comments
-
Hi @andynader,

Our TFLite support currently includes 2-dimensional binary convolutions but not 1-dimensional ones. It should work if you use a 2D convolution with height (or width) set to `1` and filter size `(3, 1)` (or `(1, 3)`). The `UpSampling1D` layers become `UpSampling2D` with `size=(1, x)`.
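The reshaping trick above works because a 2D convolution with a height-1 input and a `(1, k)` kernel computes exactly the same values as a 1D convolution with kernel size `k`. A minimal numpy sketch of that equivalence (the `conv1d`/`conv2d` helpers are naive reference implementations written for this example, not part of Larq):

```python
import numpy as np

def conv1d(x, w):
    """Naive 'valid' 1-D cross-correlation."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def conv2d(x, w):
    """Naive 'valid' 2-D cross-correlation."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(10)   # a length-10 "1-D" signal
w = rng.standard_normal(3)    # a size-3 1-D kernel

out1d = conv1d(x, w)
# Treat the signal as a height-1 image and the kernel as a (1, 3) filter.
out2d = conv2d(x[None, :], w[None, :])

assert out2d.shape == (1, 8)
assert np.allclose(out1d, out2d[0])
```

In a Keras/Larq model the same idea means reshaping inputs from `(length, channels)` to `(1, length, channels)` and swapping each 1D layer for its 2D counterpart with the extra dimension fixed at 1, so the converter sees only the supported 2D binary ops.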
-
Yup, that worked, thank you so much!