Vastly simplify and accelerate memory management. #192

Open
mratsim opened this issue Feb 24, 2023 · 1 comment
Comments

mratsim (Owner) commented Feb 24, 2023

While writing a fresh threadpool implementation for high-speed cryptography (https://github.com/mratsim/constantine/tree/1dfbb8b/constantine/platforms/threadpool), I found a new design for low-overhead memory management:

  • We can make the Flowvar result channel intrusive to the task.

Assuming a fully message-passing environment, this is equivalent to sending the task back to the waiter once it is finished, so it stays compatible with one of Weave's design goals.

With this, we might be able to completely remove the memory folder and the two-level caching structure (memory pool + lookaside list). This could accelerate machine-learning algorithms like GEMM / matrix multiplication, as those are very cache-sensitive and our memory pool currently triggers many page faults.

dumblob commented Feb 27, 2023

This is an interesting idea. Are there any preliminary measurements showing the difference in behavior across all/many tasks from the Weave benchmark set (not just cryptography computations)?
