Is it possible to do post-training quantization with Parseq? I'm looking for ways to speed up inference time. I tried training a parseq-tiny model but lost about 13% absolute val accuracy.
I'm new to quantization and am unsure which types of models benefit from it or which kind of quantization to use.
Thanks for any suggestions!
Did you manage to speed up inference time? And did you use post-training quantization?
By default, the model is trained with quantization-aware training, which further helps preserve accuracy during quantization.
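For readers unfamiliar with quantization-aware training (QAT): it inserts fake-quantize ops during training so the model learns weights that survive int8 conversion. A minimal eager-mode sketch in PyTorch, again on a toy model rather than PARSeq itself:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig, prepare_qat

# Toy model standing in for the real network.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
model.train()  # prepare_qat requires training mode

# Attach a QAT config and insert fake-quantize observers.
model.qconfig = get_default_qat_qconfig("fbgemm")
qat_model = prepare_qat(model)

# ... fine-tune qat_model as usual; the fake-quant ops simulate int8
# rounding so the weights adapt to quantization error ...
y = qat_model(torch.randn(8, 16))
```

After fine-tuning, `torch.ao.quantization.convert` produces the actual int8 model (static quantization also needs QuantStub/DeQuantStub around the network, omitted here for brevity).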