
Enable BigQueryIO write throttling detection #31253

Merged
merged 4 commits into apache:master on May 21, 2024

Conversation

@Abacn Abacn (Contributor) commented May 10, 2024

Please add a meaningful description for your change here


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@Abacn Abacn closed this May 13, 2024
@Abacn Abacn reopened this May 13, 2024
@Abacn Abacn (Contributor Author) commented May 13, 2024

Tested with decreased quota (AppendRows quota capped to 5 GB/min).

before: [screenshot]

after: [screenshot]

Both pipelines still failed, as the BigQuery service was severely throttled. Nevertheless, autoscaling is now downscaling, which is better than before.

Tuning is needed on the Dataflow side to get the pipeline running smoothly. Most importantly, the downscale decision currently isn't made until 3+3=6 min into the pipeline run, by which time work items are already failing.

if (!quotaError) {
  // This forces us to close and reopen all gRPC connections to Storage API on error,
  // which empirically fixes random stuckness issues.
  invalidateWriteStream();
  allowedRetry = 5;
} else {
  // Quota/throttling errors get a larger retry budget before failing the work item.
  allowedRetry = 10;
}
@Abacn (Contributor Author):

Originally, it would retry 5 times (making the API call 6 times) within (1.5^6-1)/(1.5-1) ≈ 20 s. Now, 10 retries happen within (1.5^8-1)/(1.5-1) + 20×2 ≈ 90 s.
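
For reference, a rough sketch of that arithmetic, assuming a 1 s initial backoff, a 1.5x multiplier, and a 20 s cap on any single backoff (parameters inferred from the formulas above, not read from the PR's code):

// Hypothetical helper illustrating the backoff arithmetic above: sums the wait
// time across a given number of backoffs (1 s initial, 1.5x growth, 20 s cap).
static double totalBackoffSeconds(int backoffWaits) {
  double total = 0;
  double delay = 1.0; // assumed 1 s initial backoff
  for (int i = 0; i < backoffWaits; i++) {
    total += Math.min(delay, 20.0); // assumed 20 s cap per backoff
    delay *= 1.5;
  }
  return total; // backoffWaits = 6 -> ~21 s; backoffWaits = 10 -> ~89 s
}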

@Abacn Abacn marked this pull request as ready for review May 14, 2024 18:58
@Abacn Abacn (Contributor Author) commented May 14, 2024

R: @ahmedabu98 @JayajP

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control

@ahmedabu98 ahmedabu98 (Contributor) left a comment

Thanks, left some comments

Comment on lines +1130 to +1133
@Override
public MetricName getName() {
  return name;
}
@ahmedabu98 (Contributor):

Does it make sense to also include the sub-counter names here?

@Abacn Abacn (Contributor Author) commented May 18, 2024:

It makes sense. However, name is a MetricName object containing a namespace + name; it is not obvious how to concatenate the sub-counter names into it, so I left it as is.
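
For illustration only, a hypothetical way the sub-counter names could be folded into the reported name (this is not what the PR does; the PR returns the top-level MetricName unchanged, and combinedName below is an invented helper, assuming java.util.List and the Beam metrics imports are in scope):

// Hypothetical helper, not part of this PR: appends the sub-counter names
// to the top-level counter's name while keeping its namespace.
static MetricName combinedName(MetricName base, List<Counter> subCounters) {
  String joined =
      subCounters.stream()
          .map(c -> c.getName().getName())
          .collect(java.util.stream.Collectors.joining(","));
  return MetricName.named(base.getNamespace(), base.getName() + "{" + joined + "}");
}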

Comment on lines +1088 to +1093
/**
 * A counter holding a list of counters. Incrementing the counter will increment every sub-counter
 * it holds.
*/
static class NestedCounter implements Counter, Serializable {

@ahmedabu98 (Contributor):

This is a nice abstraction. Should we include it under package org.apache.beam.sdk.metrics instead?

@Abacn (Contributor Author):

I also thought about it. The current plan is: if a second code path ends up using this nested counter, we should move it to a common package (i.e. Java core).
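
For context, a minimal standalone sketch of what such a nested counter can look like against org.apache.beam.sdk.metrics.Counter (a hedged illustration; the constructor and field names are assumptions, not necessarily what this PR implements):

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.MetricName;

/** A counter holding a list of counters; incrementing it increments every sub-counter. */
class NestedCounter implements Counter, Serializable {
  private final MetricName name;
  private final List<Counter> counters;

  NestedCounter(MetricName name, Counter... counters) {
    this.name = name;
    this.counters = Arrays.asList(counters);
  }

  @Override
  public void inc() {
    for (Counter c : counters) {
      c.inc();
    }
  }

  @Override
  public void inc(long n) {
    for (Counter c : counters) {
      c.inc(n);
    }
  }

  @Override
  public void dec() {
    for (Counter c : counters) {
      c.dec();
    }
  }

  @Override
  public void dec(long n) {
    for (Counter c : counters) {
      c.dec(n);
    }
  }

  @Override
  public MetricName getName() {
    return name;
  }
}

Note that for such a wrapper to be usable inside a DoFn, the wrapped sub-counters themselves need to be serializable as well.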

@ahmedabu98 ahmedabu98 (Contributor) commented:

Most importantly, the downscale decision currently isn't made until 3+3=6 min into the pipeline run, by which time work items are already failing.

What does each 3 mean? Is there a way to get around it?

@Abacn Abacn (Contributor Author) commented May 18, 2024

Most importantly, the downscale decision currently isn't made until 3+3=6 min into the pipeline run, by which time work items are already failing.

What does each 3 mean? Is there a way to get around it?

This is a Dataflow autoscaler strategy thing.

The first 3 min is because the first throttling signal from the backend appears about 3 min after the pipeline starts running. Example log:

F11 is throttled (fraction of time throttled = 0.2472). Recommend 75.28 threads instead of 100

Then there is a downscale signal every 30 s.

The second 3 min is because the downscale signal must be stable for 3 min before the autoscaler takes action.

  • Example recommendation < 3 min:
    why: Desire to downscale because overall work duration is 17h59m48.01785516525s and desired parallelism is 74 of which 100% is allocated to this pool, but there was a large decrease for only 2m30.000081865s, less than 3m
  • Example recommendation at 3 min:
    why: Downscaling because overall work duration is 18h15m50.3475206785s and desired parallelism is 65 of which 100% is allocated to this pool and there was a large decrease for more than 3m

@Abacn Abacn (Contributor Author) commented May 18, 2024

We may (and should) optimize the Dataflow autoscaler; this is an internal task (not Beam).

@ahmedabu98 ahmedabu98 (Contributor) commented:

I see, thanks for providing those details! This LGTM.

@Abacn Abacn merged commit 675dab2 into apache:master May 21, 2024
18 checks passed
@Abacn Abacn deleted the bqwritemetric branch May 21, 2024 15:19