I'd like to thank you guys for providing such an awesome BNN library!
I noticed that inference with the optimized LCE library is really fast. Can that speed-up be brought to the training phase too? That would save a lot of resources.
It is definitely possible to optimize training; currently the binary computations are actually performed in float precision. However, binarization would only apply to the forward pass: the backward pass still requires higher-precision computations, so the impact on overall training time would be much smaller than what we see for inference. It would also take significant effort: LCE is optimized for Cortex-A platforms, whereas optimized training code would need to run on GPUs and TPUs, and would therefore require separate kernel implementations.
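For context, here is a minimal sketch of why the backward pass stays in float precision. The gradient of `sign()` is zero almost everywhere, so BNN training relies on a straight-through estimator (STE) that passes float gradients through the binarization. This is the standard BNN approach; the exact quantizer used by Larq/LCE may differ in detail:

```python
import tensorflow as tf

@tf.custom_gradient
def ste_sign(x):
    """Binarize to {-1, +1} on the forward pass.

    The true gradient of sign() is zero almost everywhere, so the
    backward pass uses a straight-through estimator: the incoming
    float gradient is passed through unchanged wherever |x| <= 1.
    """
    binarized = tf.where(x >= 0, tf.ones_like(x), -tf.ones_like(x))

    def grad(dy):
        # The backward pass stays in float precision (clipped to the
        # region |x| <= 1), which is why binarizing the forward pass
        # alone cannot deliver inference-style speed-ups in training.
        return tf.where(tf.abs(x) <= 1.0, dy, tf.zeros_like(dy))

    return binarized, grad

x = tf.constant([-1.5, -0.2, 0.3, 2.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = ste_sign(x)
print(tape.gradient(y, x))  # float gradients: [0., 1., 1., 0.]
```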
As the main motivation for BNNs is efficient inference rather than efficient training, we are completely focused on that for now. If someone in the community is excited about more efficient training, we would definitely welcome their contributions though!