Enable BigQueryIO write throttling detection #31253
Conversation
Tested with decreased quota (AppendRows quota capped to 5 GB/min). Both pipelines still failed, as the BigQuery service is severely throttled. Nevertheless, autoscaling downscale is acting, better than before. Needs tuning on the Dataflow side to get the pipeline running smoothly. Most importantly, the downscale decision currently isn't made until 3+3=6 min into the pipeline run, which already causes work items to fail.
```java
if (!quotaError) {
  // This forces us to close and reopen all gRPC connections to Storage API on error,
  // which empirically fixes random stuckness issues.
  invalidateWriteStream();
  allowedRetry = 5;
} else {
  allowedRetry = 10;
}
```
Originally, it would retry 5 times (making the API call 6 times) within (1.5^6-1)/(1.5-1) ≈ 20 s. Now, 10 retries happen in (1.5^8-1)/(1.5-1) + 20/2 ≈ 90 s.
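The geometric-series estimate above can be sketched as follows. This is an illustration only: the 1 s initial delay and the `BackoffBudget` helper are assumptions for the sketch, not the connector's actual backoff configuration.

```java
// Illustrative only: estimates the cumulative wait of exponential backoff with
// a 1.5x multiplier, matching the geometric-series estimate in the comment above.
// The 1 s initial delay and this helper class are assumptions for the sketch.
public class BackoffBudget {

  /** Sum of initialDelaySec * multiplier^i for i in [0, terms): a geometric series. */
  static double totalWaitSeconds(double initialDelaySec, double multiplier, int terms) {
    return initialDelaySec * (Math.pow(multiplier, terms) - 1) / (multiplier - 1);
  }

  public static void main(String[] args) {
    // 6 attempts with 1.5x backoff: roughly 20 s of total waiting.
    System.out.printf("6 attempts: ~%.0f s%n", totalWaitSeconds(1.0, 1.5, 6));
    // 8 terms of the same series: roughly 49 s, before any additional fixed waits.
    System.out.printf("8 terms: ~%.0f s%n", totalWaitSeconds(1.0, 1.5, 8));
  }
}
```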
Thanks, left some comments
```java
@Override
public MetricName getName() {
  return name;
}
```
Does it make sense to also include the sub-counter names here?
It makes sense. However, name is a MetricName object containing a namespace + name. It is not obvious how to concatenate the sub-counter names into it, so I left it as is.
```java
/**
 * A counter holding a list of counters. Incrementing this counter increments every sub-counter
 * it holds.
 */
static class NestedCounter implements Counter, Serializable {
```
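For illustration, here is a minimal sketch of the fan-out idea behind this class. The SimpleCounter and LongCounter types below are hypothetical stand-ins for Beam's org.apache.beam.sdk.metrics.Counter (which also carries a MetricName and dec() methods); this is not the PR's actual implementation.

```java
import java.io.Serializable;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for Beam's Counter interface, reduced to increments.
interface SimpleCounter {
  void inc();
  void inc(long n);
}

// A counter holding a list of counters: every increment fans out to all sub-counters.
class FanOutCounter implements SimpleCounter, Serializable {
  private final List<SimpleCounter> children;

  FanOutCounter(SimpleCounter... children) {
    this.children = Arrays.asList(children);
  }

  @Override
  public void inc() {
    inc(1L);
  }

  @Override
  public void inc(long n) {
    for (SimpleCounter c : children) {
      c.inc(n); // each sub-counter observes the same increment
    }
  }
}

// Simple in-memory counter used to demonstrate the fan-out.
class LongCounter implements SimpleCounter, Serializable {
  final AtomicLong value = new AtomicLong();

  @Override
  public void inc() {
    value.incrementAndGet();
  }

  @Override
  public void inc(long n) {
    value.addAndGet(n);
  }
}

public class FanOutCounterDemo {
  public static void main(String[] args) {
    LongCounter perTable = new LongCounter();
    LongCounter overall = new LongCounter();
    SimpleCounter throttled = new FanOutCounter(perTable, overall);
    throttled.inc(3);
    throttled.inc();
    // Both sub-counters see all 4 increments.
    System.out.println(perTable.value.get() + " " + overall.value.get()); // prints "4 4"
  }
}
```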
This is a nice abstraction. Should we include it under package org.apache.beam.sdk.metrics instead?
I also thought about it. The current plan is: if a second code path starts using this nested counter, then we should move it to a common package (i.e., Java core).
What does each 3 mean? Is there a way to get around it?
This is a Dataflow autoscaler strategy thing. The first 3 min is because the first throttled signal from the backend appears 3 min after the pipeline starts running. Example log:
Then there is a downscale signal every 30 s. The second 3 min is because the downscale signal must be stable for 3 min before the autoscaler takes action.
We may (and should) optimize the Dataflow autoscaler; this is an internal task (not Beam).
I see, thanks for providing those details! This LGTM.