
Optimization for forward pass during training #613

Open
TheSeriousProgrammer opened this issue Jan 27, 2021 · 1 comment

Comments

@TheSeriousProgrammer

I'd like to thank you guys for building such an awesome BNN library.

I observed that inference in the optimized LCE library is really fast. Can that speedup be brought to the training phase too? It would save a lot of resources.

@koenhelwegen
Contributor

Hi, great to hear you find Larq useful!

It is definitely possible to optimize training: currently, the binary computations are actually performed in float precision. However, binarization would only apply to the forward pass; the backward pass still requires higher-precision computations, so the impact on overall training time would be much smaller than what we see for inference. It would also take significant effort: LCE is optimized for Cortex-A platforms, whereas optimized training code would need to run on GPUs and TPUs, and would therefore require separate kernel implementations.
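To make this concrete, here is a rough standalone sketch (not LCE's actual kernels) of how binarization behaves during training. The quantizer below mimics the behavior of `larq.quantizers.ste_sign`: the forward pass produces {-1, +1} values, but they live in ordinary float32 tensors, and the backward pass is the straight-through estimator, which needs the full-precision input:

```python
import tensorflow as tf

@tf.custom_gradient
def ste_sign(x):
    # Forward pass: binarize to {-1, +1}. The output is still a float32
    # tensor, so standard float kernels execute this op during training.
    binarized = tf.sign(x)

    def grad(dy):
        # Backward pass: straight-through estimator. Gradients flow through
        # unchanged where |x| <= 1 and are zeroed elsewhere. This needs the
        # full-precision input x, so the backward pass cannot be binarized.
        return dy * tf.cast(tf.abs(x) <= 1.0, dy.dtype)

    return binarized, grad

x = tf.Variable([0.3, -1.7, 0.9])
with tf.GradientTape() as tape:
    y = tf.reduce_sum(ste_sign(x))
print(tape.gradient(y, x))  # [1., 0., 1.] -- float-valued gradients
```

Even if the forward `tf.sign` were replaced by a true binary kernel, the gradients (and the latent weights they update) remain full-precision floats, which is why the potential savings during training are limited.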

As the main motivation for BNNs is efficient inference rather than efficient training, we are completely focused on inference for now. That said, if someone in the community is excited about more efficient training, we would definitely welcome their contributions!
