This repository has been archived by the owner on Feb 28, 2023. It is now read-only.
Asynchronous code seems to be all the rage these days. I figured out a way of implementing threads in Nebulet that appear to be normal preemptive threads to the user but are, in fact, lightweight coroutines. This would allow the syscall interface to be completely asynchronous while still appearing to block. This would simplify writing applications for Nebulet and improve performance.
This would involve removing preemption for threads, and each process would only run on a single core at a time (essentially making processes the de facto unit of true concurrency), eliminating the need for expensive synchronization.
Gist
No preemption at the thread level. A process only executes on one CPU at a time.
Create threads normally, but thread switching code is injected at specific points, like external function calls and in some loops.
To run on multiple CPUs, create multiple processes.
To the user, threads appear to be fully-preemptive, but under the hood, they are coroutines.
Specific spots can be marked as thread-switch locations, and functions can be marked so that no thread switches are generated inside them.
Advantages
May result in better overall performance.
No locks or atomics are necessary for tables or thread queues.
The syscall interface can appear to be blocking, but actually be asynchronous.
No language support required for coroutines.
Disadvantages
May restrict some use cases; for example, CPU-bound code that never reaches a switch point could starve other threads in the same process.
Design Challenges
Requires some way of generating and saving new wasm stacks.
Requires a complete rewrite of the threading support in Nebulet.
Thoughts?
would only allow a single cpu to run in a process at a time
Wouldn't this mean that the number of processes that can run at one time is limited by the number of CPUs?
To run on multiple cpus, create multiple processes.
I guess this would negate the speed gained from not synchronizing threads, since the process has to consider clones of itself that access the same resources.
@AleVul I think I didn't describe that part well enough. It's not that each core gets pinned to a process; it's just that a process will only run on a single core at a time.
The second part would have to be determined when and if this idea is actually implemented. Since the processes could share their linear memory, any synchronization would be controlled by whatever code is running there, same as any normal multithreaded code.