
Remote-write: threshold to skip resharding should be higher #14044

Open · bboreham opened this issue May 3, 2024 · 12 comments
bboreham (Member) commented May 3, 2024

I saw a lot of log lines like this:

ts=2024-05-02T14:02:48.270112953Z level=warn msg="Skipping resharding, last successful send was beyond threshold" [...] lastSendTimestamp=1714658566 minSendTimestamp=1714658568

The context was that we wanted to feed data in a timely manner, so BatchSendDeadline had been reduced to 100ms.

The code that generates the message:

// We shouldn't reshard if Prometheus hasn't been able to send to the
// remote endpoint successfully within some period of time.
minSendTimestamp := time.Now().Add(-2 * time.Duration(t.cfg.BatchSendDeadline)).Unix()
lsts := t.lastSendTimestamp.Load()
if lsts < minSendTimestamp {
level.Warn(t.logger).Log("msg", "Skipping resharding, last successful send was beyond threshold", "lastSendTimestamp", lsts, "minSendTimestamp", minSendTimestamp)

This check runs every 10s (hard-coded), so whenever BatchSendDeadline is less than 5s the 2×BatchSendDeadline window is shorter than the check interval, and there is a real chance that we did not even try to send within it.
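To make the timing concrete, here is a small self-contained sketch (not the Prometheus source; the two timestamps are the ones from the log line above):

package main

import (
	"fmt"
	"time"
)

func main() {
	// With BatchSendDeadline reduced to 100ms, the "recent send" window
	// used by the check is only 2 * 100ms = 200ms.
	batchSendDeadline := 100 * time.Millisecond
	window := 2 * batchSendDeadline
	fmt.Println("recent-send window:", window) // 200ms

	// Unix timestamps taken from the log line above.
	lastSendTimestamp := int64(1714658566)
	minSendTimestamp := int64(1714658568)

	// The last successful send was only ~2s before the check ran, yet it
	// already counts as "beyond threshold" -- even though the check itself
	// only runs every 10s, so a quiet couple of seconds is entirely normal.
	fmt.Println("skip resharding:", lastSendTimestamp < minSendTimestamp) // true
}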

Proposal

I suggest the threshold should be 2 * time.Duration(t.cfg.BatchSendDeadline) + shardUpdateDuration, so the window accounts for the interval at which the check itself runs.
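Concretely, that would amount to something like the following (a hypothetical sketch mirroring the snippet above, not a merged implementation):

// Proposed threshold: allow for the interval at which this check runs
// (shardUpdateDuration, hard-coded to 10s) in addition to two batch-send
// deadlines, so a short BatchSendDeadline cannot trip the warning on its own.
minSendTimestamp := time.Now().Add(-(2*time.Duration(t.cfg.BatchSendDeadline) + shardUpdateDuration)).Unix()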

bboreham (Member, Author) commented May 3, 2024

I notice that in a similar previous issue, @csmarchbanks said (#7124 (comment)):

At very low remote write volumes it is very easy to go through multiple batch send durations without new samples coming in

Which matches the situation in this case (volume was about 500 series, scraped every 5s).
Anyway this judgement seems highly dependent on the value of BatchSendDeadline; is it unreasonable to set it to 100ms?

low volumes are unlikely to ever reshard above the minimum anyway

More context: the machine was occasionally under heavy CPU load; I believe this generated a backlog on the send queue.
(Sadly I don't have metrics to confirm this.)

kushalShukla-web (Contributor) commented
Is this good, @bboreham, or do we need to wait for others to review?

bboreham (Member, Author) commented May 4, 2024

Best to comment on the PR within the PR itself.

kushalShukla-web (Contributor) commented
Okay, @bboreham.

cstyan (Member) commented May 6, 2024

Which matches the situation in this case (volume was about 500 series, scraped every 5s).
Anyway this judgement seems highly dependent on the value of BatchSendDeadline; is it unreasonable to set it to 100ms?

I would have suggested/assumed that people would drop the batch size (max_samples_per_send) a lot lower before dropping BatchSendDeadline that low.

bboreham (Member, Author) commented May 7, 2024

I don't think that helps. In my example, Prometheus scraped 509 series every 5 seconds; I wanted it to send those 509 series without waiting 5 seconds.
If I reduce max_samples_per_send from the default of 2000 to, say, 100, it will send 500 of them promptly, but the remaining 9 don't fill a batch and still wait for the deadline, and I want all 509 series sent.

cstyan (Member) commented May 9, 2024

Personally I would still lower the batch size before lowering the send deadline, but even so I think guarding against excessive resharding checks is a valid change. Reviewing the PR again today.

bboreham (Member, Author) commented

I don't think I am understanding your point. What would you lower max_samples_per_send to, given my example?

cstyan (Member) commented May 16, 2024

I don't think I am understanding your point. What would you lower max_samples_per_send to, given my example?

Something below 509? Or even just 1000 or so, and lower the send deadline to ~1s. I don't know exactly what your use case is, but scraping a small number of samples and then always sending all of them via remote write ASAP isn't really a situation we've designed for. Setting the send deadline to 100ms is just a workaround that's worked in your case.

This is separate from the issue of the resharding check happening too often when the send deadline is < 5s, which I don't have any issue with merging a fix for.

bboreham (Member, Author) commented

lower the send deadline to ~1s

OK, that case still shows the issue I am describing: two times 1 second is still well below the 10-second interval the check runs at.

always sending all of them via remote write ASAP

That isn't what I asked for; I asked for:

in a timely manner

and

without waiting 5 seconds

Setting the send deadline to 100ms is just a workaround that's worked in your case.

I disagree; it matches what I wanted.

Bryan

cstyan (Member) commented May 18, 2024

Bryan and I discussed this on Slack; over text we'd misunderstood each other. His config changes were definitely valid given the low scrape load he had. Remote write has some gaps when it comes to handling timely sending of data in that kind of scenario; the hard-coding of the reshard check ticker is just one of those gaps.

I'll be opening a few issues soon for some things we can try out; a number of people are interested in taking on smaller tasks in remote write, and those could be good first issues for them.

kushalShukla-web (Contributor) commented May 25, 2024

Yeah, after reading and understanding the code, @bboreham's assumption here is valid: we shouldn't have to wait 5 seconds, and as @bboreham asked, data should be fed in a timely manner. We should also remove the hard-coding of the resharding check ticker.
