Comparison to Weave #7

Open
dumblob opened this issue Nov 1, 2021 · 14 comments

Comments

@dumblob

dumblob commented Nov 1, 2021

This looks like a simple yet quite efficient implementation of work stealing. I wonder how this compares to a bit more modern work stealing framework Weave.

Before I spend hours of reading & understanding the inner details of Lace, could anyone make a rough comparison (of approaches taken as well as real measured performance). Weave has quite readable primer & documentation, so this comparison is more about the Lace side 😉.

@trolando
Owner

trolando commented Nov 2, 2021

I'm not aware of Weave. As far as I can see, one difference is that Weave is not for C, while Lace is?

You can easily compile Lace, run the Fibonacci benchmark, and compare it with Weave if you want to compare the performance. I don't have time for this right now.

@dumblob
Author

dumblob commented Nov 2, 2021

Weave compiles to C.

Should you find some time to write down how the internals of Lace work and what the trade-offs are, please let me know. Thanks!

@dumblob
Author

dumblob commented May 30, 2022

Any insights into how Lace works?

@trolando
Owner

I wrote two papers about it back in 2014 when it was in active development, and there are examples in the benchmarks folder. For example, benchmarks/fib/fib-lace.c is a simple example.

You initialize Lace with lace_start to start a bunch of workers that will then actively hold your CPU hostage until you use the RUN macro to start a task. Tasks are defined using macros like TASK_1(return_type, name, param_1_type, param_1_name) { code here }. In Lace tasks, you can then SPAWN new tasks so that other workers can execute them, and you use SYNC to obtain the result of a spawned task. (If the task was not stolen, it is executed by the current worker.) You can also use CALL to directly execute a task instead of putting it on the local task stack for others to steal.

There are a bunch of more advanced features like barriers and interrupts; they are used in my Sylvan project. You can also produce statistics on the performance of the load balancer. And you can temporarily stop the Lace workers and restart them using lace_suspend and lace_resume.

@trolando
Owner

Internally, as described in my early PhD work, Lace uses a special queue that is optimized for this particular task. It's the topic of one of the papers. The infrastructure is very similar to Wool, a project by Karl Filip Faxen in 2010. This in turn was inspired by Cilk, which used to be part of the GCC compiler for a while. In my opinion, Lace and Wool offer the best performance for fine-grained task-based parallelism (fork-join semantics), but of course the downside is the abuse of C macros...

@trolando
Owner

By the way, you could probably easily check the benchmarks if you have experience with Weave. I see here https://github.com/mratsim/weave/tree/master/benchmarks that several of these come from Cilk, so they are probably quite similar to the benchmarks implemented in Lace. Simply using ccmake to set LACE_BUILD_BENCHMARKS to ON should suffice to build the benchmarks...
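Based on the ccmake hint above, a non-interactive build might look like this (the repository URL and binary paths are assumptions, not taken from this thread; check them against the actual repo):

```shell
# Clone and configure Lace with benchmarks enabled (assumed repo location).
git clone https://github.com/trolando/lace.git
cd lace
# Non-interactive equivalent of setting LACE_BUILD_BENCHMARKS in ccmake.
cmake -S . -B build -DLACE_BUILD_BENCHMARKS=ON
cmake --build build
# Then run a benchmark binary, e.g.:
# ./build/benchmarks/queens-lace -w 6 14
```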

@trolando
Owner

It takes some effort to install this with an outdated Nim on Manjaro; I had to install it manually and then figure out how to run things, guessing that simply nim r benchmarks/nqueens/weave_nqueens.nim 14 is the appropriate call...

Scheduler:            Weave (eager flowvars)
Benchmark:            N-queens
Threads:              6
Time(us)              41513.84204101562
Max RSS (KB):         22540
Runtime RSS (KB):     8836
# of page faults:     36344
Problem size:         14x14 board with 14 queens
Solutions found:      365596

With Lace, I get:

[tom@ryve build]$ benchmarks/queens-lace -w 6 14
running queens 14 with 6 workers...
Result: Q(14) = 365596
Time: 4.037406

So Lace in 4 seconds, Weave in 41 seconds.

Another benchmark, fib... disappointing. Weave takes 10.852 seconds to compute fib 38; Lace does it in 0.24 seconds on my machine.

Maybe this is not how I'm supposed to run Weave, but I have no clue, never used nim before.

@dumblob
Author

dumblob commented May 31, 2022

Wow, you are quick! Thanks a lot for the description of Lace. Admittedly I hadn't read the paper, so this overview of yours made me want to read it!

Btw. yes, Nim is weird in that it requires some parameters to compile a "production build", and nim r is always slow (it produces a debug build, disables optimizations, etc.).

If I have any more questions, I will certainly come back 😉.

Btw. thanks for mentioning Sylvan - I might have a use case for it in a year or two.

@trolando
Owner

Using nim c -d:danger ... to compile to a native executable...

[tom@ryve weave]$ ./benchmarks/nqueens/weave_nqueens 15
Scheduler:            Weave (eager flowvars)
Benchmark:            N-queens
Threads:              6
Time(us)              24365.958984375
Max RSS (KB):         23080
Runtime RSS (KB):     21588
# of page faults:     176675
Problem size:         15x15 board with 15 queens
Solutions found:      2279184

[tom@ryve build]$ benchmarks/queens-lace 15 -w 6
running queens 15 with 6 workers...
Result: Q(15) = 2279184
Time: 23.007802

So very similar performance for NQueens, Lace takes 23.01 seconds and Weave takes 24.37 seconds.

[tom@ryve weave]$ benchmarks/fibonacci/weave_fib 41
--------------------------------------------------------------------------
Scheduler:                                    Weave  (eager futures)
Benchmark:                                    Fibonacci
Threads:                                      6
Time(ms)                                      2662.712
Max RSS (KB):                                 25684
Runtime RSS (KB):                             23528
# of page faults:                             5943
--------------------------------------------------------------------------
n requested:                                  41
result:                                       165580141

[tom@ryve build]$ benchmarks/fib-lace 41 -w 6
fib(41) = 165580141
Time: 0.188222

For Fibonacci, which is very sensitive to overhead, Lace takes 188 ms and Weave takes 2663 ms.

@trolando
Owner

In case you wonder if I cheat with fib, it is completely fine-grained:

TASK_1(int, pfib, int, n)
{
    if (n < 2) {
        return n;
    } else {
        int m, k;
        SPAWN(pfib, n-1);     /* offer pfib(n-1) for other workers to steal */
        k = CALL(pfib, n-2);  /* execute pfib(n-2) directly on this worker */
        m = SYNC(pfib);       /* get pfib(n-1); run it here if not stolen */
        return m + k;
    }
}

@dumblob
Author

dumblob commented Jun 1, 2022

Very interesting! That sparked my interest in looking even at the assembly of both "contestants". The variation in performance seen for Weave is something I would like to investigate a bit.

@mratsim would this be something to investigate (maybe it has to do with the changes in Nim lately)?

Btw. @trolando do you use Clang or GCC compiler for both "contestants" on your machine?

@mratsim

mratsim commented Jan 18, 2023

Unfortunately improving multithreading is not my priority at the moment, but I did review the code recently.

Here are Lace and Weave on my hardware (i9-9980XE 18 cores):
[screenshots: nqueens and fibonacci benchmark results for Lace and Weave on the i9-9980XE]

So Weave is 2x to 2.5x faster on nqueens but 2.5x to 10x (!) slower on fibonacci.

Weave has two configurations, "eager allocated futures" and "lazy allocated futures", with very different overheads (see the # of page faults).
In the eager case, a future is allocated on the heap as soon as the task is scheduled. In the lazy case, it is allocated on the heap only if the task is stolen, which significantly reduces overhead for fibonacci (a pure-overhead benchmark).

The thing I'm not too sure about is that fib 40 used to take 180 to 220 ms with lazy futures (see mratsim/weave#13) and 350-400 ms with eager futures (see mratsim/weave#112) on the same machine.

I retested fibril (https://github.com/chaoran/fibril) and perf didn't change there, so I'm unsure what's up. Fibril has the lowest overhead I benchmarked on fibonacci when developing Weave, but it took an approach that required platform-specific assembly, as it is continuation-stealing/parent-stealing based:
[screenshot: fibril fibonacci benchmark results]

> Internally as described in my early PhD work, Lace uses a special queue that is optimized for this particular task. It's the topic of one of the papers. The infrastructure is very similar to Wool, a project by Karl Filip Faxen in 2010. This in turn was inspired on Cilk, which used to be part of the GCC compiler for a while. In my opinion, Lace and Wool offer the best performance for fine-grained task-based parallelism. (fork-join semantics) but of course the downside is the abuse of C macros...

Weave is based on the PhD work of @aprell (C implementation: https://github.com/aprell/tasking-2.0, thesis: https://epub.uni-bayreuth.de/2990/) and also on a collection of research notes here: https://github.com/numforge/laser/blob/e23b5d6/research/runtime_threads_tasks_allocation_NUMA.md

The main internal characteristic is that it's based on fully private deques, and steal requests are actually work-sharing requests. Also, the theft policy (steal-one vs steal-half) is adaptive, and for for-loops, iterations are partitioned lazily instead of eagerly, so they auto-adapt to the function being called and the resources available.

In terms of usage, it seems to me that Lace requires structured parallelism, i.e. a spawn has to be synced before function exit (async/finish from Habanero; see also structured concurrency in IO runtimes, https://en.wikipedia.org/wiki/Structured_concurrency). In Weave, this is the case for the lazy futures (because of alloca), but in the default case, futures can escape their allocating scope. This allows quite flexible usage in complex codebases, at the price of memory-allocation overhead. Also, Lace seems to keep idle threads spinning.

Some of the main changes made in Weave compared to the C code were:

@dumblob
Author

dumblob commented Jan 21, 2023

> Also Lace seems to keep idle threads spinning.

Potentially related: #9 (comment)

@trolando trolando reopened this Feb 28, 2023
@trolando
Owner

trolando commented Mar 5, 2023

@dumblob what about this commit 67eaa42

The idea is that if there are no tasks at all, the threads are suspended; as soon as any task is offered (via the RUN macro), the threads are resumed until all offered tasks are done.
