Concurrent fast writers could slow down readers a lot #168

Open
buybackoff opened this issue May 29, 2019 · 0 comments

We use "A scalable reader/writer scheme with optimistic retry" and it works great until one thread writes faster than Thread.SpinWait(2) + write overhead. With SpinWait(3) it is OK, with SpinWait(1-2) the reader falls behind, and with SpinWait(0) the reader cannot finish and has to retry a lot on every read.
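For context, the read/write protocol is roughly the following (a minimal sketch with hypothetical names, assuming a single writer; not the actual Spreads code):

```csharp
using System.Threading;

// Minimal sketch of the optimistic reader/writer scheme (hypothetical names,
// single writer assumed; not the actual Spreads implementation).
// The writer bumps _nextVersion before a mutation and _version after it;
// a reader retries whenever the two differ or _version changed mid-read.
public class OptimisticCell<T>
{
    private long _version;
    private long _nextVersion;
    private T _value;

    public void Write(T value)
    {
        Volatile.Write(ref _nextVersion, Volatile.Read(ref _nextVersion) + 1);
        _value = value;
        Volatile.Write(ref _version, Volatile.Read(ref _nextVersion)); // commit
    }

    public T Read()
    {
        while (true)
        {
            long version = Volatile.Read(ref _version);
            T value = _value;
            // Succeed only if no write started or finished during the read.
            if (version == Volatile.Read(ref _version) &&
                version == Volatile.Read(ref _nextVersion))
                return value;
            // A writer that loops faster than this retry path keeps the reader stuck here.
            Thread.SpinWait(1);
        }
    }
}
```

With a writer spinning at SpinWait(0) the version check at the end of Read almost always fails, which is exactly the retry storm visible in the numbers below.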

After writing completes, the reader picks up speed, but the average performance is meaningless.

```
R: 7,624,810 -  Mops     | W: 44,395,042- 44.40 Mops
R: 14,948,724 - 7.32 Mops        | W: 86,879,552- 42.48 Mops
R: 16,739,824 - 1.79 Mops        | W: 130,164,724- 43.29 Mops
R: 28,969,333 - 12.23 Mops       | W: 164,352,363- 34.19 Mops
R: 30,091,630 - 1.12 Mops        | W: 214,434,625- 50.08 Mops
R: 31,123,402 - 1.03 Mops        | W: 265,461,224- 51.03 Mops
R: 53,628,243 - 22.50 Mops       | W: 292,383,667- 26.92 Mops
R: 55,032,757 - 1.40 Mops        | W: 341,827,791- 49.44 Mops
R: 56,703,991 - 1.67 Mops        | W: 386,726,436- 44.90 Mops
R: 58,344,120 - 1.64 Mops        | W: 431,354,549- 44.63 Mops
R: 59,942,750 - 1.60 Mops        | W: 479,392,079- 48.04 Mops
R: 61,948,521 - 2.01 Mops        | W: 521,987,499- 42.60 Mops
R: 95,847,100 - 33.90 Mops       | W: 536,870,912- 14.88 Mops
R: 107,733,712 - 11.89 Mops      | W: 567,682,949- 30.81 Mops
R: 110,024,428 - 2.29 Mops       | W: 609,088,924- 41.41 Mops
R: 111,896,536 - 1.87 Mops       | W: 651,177,773- 42.09 Mops
R: 114,889,352 - 2.99 Mops       | W: 678,798,887- 27.62 Mops
R: 118,157,346 - 3.27 Mops       | W: 704,057,085- 25.26 Mops
R: 121,674,560 - 3.52 Mops       | W: 727,441,943- 23.38 Mops
R: 124,579,592 - 2.91 Mops       | W: 757,663,770- 30.22 Mops
R: 127,908,310 - 3.33 Mops       | W: 785,043,712- 27.38 Mops
R: 130,916,682 - 3.01 Mops       | W: 812,736,546- 27.69 Mops
R: 133,863,491 - 2.95 Mops       | W: 843,006,234- 30.27 Mops
R: 136,926,119 - 3.06 Mops       | W: 871,619,644- 28.61 Mops
R: 139,549,793 - 2.62 Mops       | W: 907,048,548- 35.43 Mops
R: 142,155,461 - 2.61 Mops       | W: 940,610,733- 33.56 Mops
R: 143,609,786 - 1.45 Mops       | W: 986,714,626- 46.10 Mops
COMPLETE
Read after map complete:144312065
R: 173,557,933 - 29.95 Mops      | W: 1,000,000,000- 13.29 Mops
R: 229,166,969 - 55.61 Mops      | W: 1,000,000,000- 0.00 Mops
R: 282,067,073 - 52.90 Mops      | W: 1,000,000,000- 0.00 Mops
R: 337,812,718 - 55.75 Mops      | W: 1,000,000,000- 0.00 Mops
R: 393,673,720 - 55.86 Mops      | W: 1,000,000,000- 0.00 Mops
R: 449,502,726 - 55.83 Mops      | W: 1,000,000,000- 0.00 Mops
R: 505,312,696 - 55.81 Mops      | W: 1,000,000,000- 0.00 Mops
R: 561,149,549 - 55.84 Mops      | W: 1,000,000,000- 0.00 Mops
R: 617,032,405 - 55.88 Mops      | W: 1,000,000,000- 0.00 Mops
R: 672,894,037 - 55.86 Mops      | W: 1,000,000,000- 0.00 Mops
R: 728,635,347 - 55.74 Mops      | W: 1,000,000,000- 0.00 Mops
R: 784,298,039 - 55.66 Mops      | W: 1,000,000,000- 0.00 Mops
R: 840,862,159 - 56.56 Mops      | W: 1,000,000,000- 0.00 Mops
R: 896,937,257 - 56.08 Mops      | W: 1,000,000,000- 0.00 Mops
R: 952,781,999 - 55.84 Mops      | W: 1,000,000,000- 0.00 Mops
Read after finish:1000000000
```

**CouldReadDataStreamWhileWritingFromManyThreads**

 Case                |    MOPS |  Elapsed |   GC0 |   GC1 |   GC2 |  Memory
------               |--------:|---------:|------:|------:|------:|--------:
Write                |   36.56 | 27350 ms |  10.0 |  10.0 |  10.0 | 6152.027 MB
Read                 |   23.30 | 42914 ms |  10.0 |  10.0 |  10.0 | 4104.019 MB

The solution is to read-lock on the order version and not on every mutation. TryAddLast never changes the order of existing data, and we already throw if a cursor sees an order change.

Append/TryAddLast should write data without incrementing the count (which should be volatile), and only after the data is written commit it by incrementing the counter (same as in DataSpreads StreamLog). A cursor cannot move past the counter and therefore will not see the written data before the commit.
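A minimal sketch of that write-then-commit idea, assuming a single appending writer over a plain array (illustrative names, not the actual Spreads/StreamLog API):

```csharp
using System.Threading;

// Sketch of append-then-commit via a volatile count (illustrative names only).
// The writer stores the value first and publishes it afterwards by bumping
// _count; a reader never looks past Volatile.Read(ref _count), so it cannot
// observe a half-written slot and needs no lock at all.
public class AppendOnlyBuffer<T>
{
    private readonly T[] _items = new T[1024];
    private int _count;

    // Single writer: write the data, then commit by incrementing the counter.
    public bool TryAddLast(T value)
    {
        int index = _count;                     // only the writer mutates _count
        if (index == _items.Length) return false;
        _items[index] = value;                  // 1. write the data
        Volatile.Write(ref _count, index + 1);  // 2. commit (release-publish)
        return true;
    }

    // Readers/cursors: anything below the published count is safe to read.
    public bool TryGet(int index, out T value)
    {
        if ((uint)index < (uint)Volatile.Read(ref _count))
        {
            value = _items[index];
            return true;
        }
        value = default(T);
        return false;
    }
}
```

The cursor side then only compares its position against the published count; forward moves never need the reader lock.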

For AppendOnly series, readers should not use the lock/versions at all.

The reader lock is only needed for ISeries.TryFind/TryGetValue. Since we throw on out-of-order data in cursors, cursors only need the volatile count.

Versions should mean order versions: any mutation operation must detect whether applying it could change the order of existing data and, if so, increment nextOrderVersion. Readers either spin on the order versions (ISeries methods) or throw (cursors).
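Roughly, and under the same assumptions as the sketches above (hypothetical member names), the order-version rule could look like this:

```csharp
using System;
using System.Threading;

// Sketch of order versions (hypothetical names). Only mutations that can change
// the order of existing data bump the order version; plain appends do not.
public class OrderVersionedSeries
{
    private long _orderVersion;
    private long _nextOrderVersion;

    // Called by mutations that may reorder existing data (e.g. an out-of-order insert).
    private void BeforeOrderChange() =>
        Volatile.Write(ref _nextOrderVersion, Volatile.Read(ref _nextOrderVersion) + 1);

    private void AfterOrderChange() =>
        Volatile.Write(ref _orderVersion, Volatile.Read(ref _nextOrderVersion));

    // ISeries.TryFind/TryGetValue-style methods spin on the order version.
    public TResult ReadOrdered<TResult>(Func<TResult> read)
    {
        while (true)
        {
            long version = Volatile.Read(ref _orderVersion);
            TResult result = read();
            if (version == Volatile.Read(ref _orderVersion) &&
                version == Volatile.Read(ref _nextOrderVersion))
                return result;
            Thread.SpinWait(1);
        }
    }

    // Cursors do not retry: they detect the order change and throw.
    public void ThrowIfOrderChanged(long orderVersionAtMoveStart)
    {
        if (orderVersionAtMoveStart != Volatile.Read(ref _orderVersion))
            throw new InvalidOperationException("Series order changed during cursor operation.");
    }
}
```

Since TryAddLast never calls BeforeOrderChange, fast appends leave the order version untouched and readers stop paying for them.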
