
feat(internal): provide wrapping for retried errors #4797

Merged
merged 9 commits into googleapis:master from tritone:ctx-err-wrap on Sep 23, 2021

Conversation

@tritone (Contributor) commented Sep 23, 2021

This modifies the retry code so that both the context error and the service error are available when a call is retried until it hits a context deadline or cancellation.

This preserves the error string as it is currently assembled by internal/annotate; for example, a googleapi error from storage looks like this:

object.Attrs: retry failed with context deadline exceeded; last error: googleapi: got HTTP response code 503 with body: {"code":"503","message":{"error":{"message":"Retry Test: Caused a 503"}}}

Go 1.13 error semantics can be used to introspect the underlying context and service errors; here are some examples:

_, err = client.Bucket(bucketName).Object(objName).Attrs(timeoutCtx)
if err != nil {
	// Unwrap yields the wrapped *googleapi.Error.
	if e, ok := err.(interface{ Unwrap() error }); ok {
		log.Printf("wrapped error: %v", e.Unwrap())
	}

	// errors.As allows unwrapping with access to googleapi.Error fields.
	var errAsGoogleapi *googleapi.Error
	if errors.As(err, &errAsGoogleapi) {
		log.Printf("error code: %v", errAsGoogleapi.Code)
	}

	// errors.Is matches the wrapped context sentinel error.
	isDeadlineExceeded := errors.Is(err, context.DeadlineExceeded) // true
	log.Printf("deadline exceeded: %v", isDeadlineExceeded)
}

internal.Retry is used by the storage, datastore and bigquery clients, as well as integration tests for compute and pubsub. I want to make sure to check this with everyone affected.

Currently, if an operation is retried until it times out, we
give an error string that covers both the context error and
the service error. However, this was implemented before Go 1.13
error wrapping. With this change, the error string stays the same,
but we provide wrapping semantics so users can introspect context
errors and gRPC/HTTP errors from the returned error.
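
For reference, a minimal sketch of the wrapping approach described above; the type and field names (wrappedCallErr, ctxErr, lastErr) are illustrative and not necessarily what internal/retry.go uses. The idea is to keep the existing "retry failed with ...; last error: ..." string while exposing both errors through Unwrap and Is:

package internal

import (
	"errors"
	"fmt"
)

// wrappedCallErr pairs the context error that ended the retry loop with the
// last error returned by the retried call.
type wrappedCallErr struct {
	ctxErr  error // e.g. context.DeadlineExceeded or context.Canceled
	lastErr error // last gRPC/HTTP error from the service
}

// Error preserves the existing message format.
func (e wrappedCallErr) Error() string {
	return fmt.Sprintf("retry failed with %v; last error: %v", e.ctxErr, e.lastErr)
}

// Unwrap exposes the service error to errors.Unwrap / errors.As.
func (e wrappedCallErr) Unwrap() error { return e.lastErr }

// Is reports a match against the context sentinel as well as the service
// error, so errors.Is(err, context.DeadlineExceeded) returns true.
func (e wrappedCallErr) Is(target error) bool {
	return errors.Is(e.ctxErr, target) || errors.Is(e.lastErr, target)
}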
@tritone tritone requested a review from a team as a code owner September 23, 2021 09:04
@google-cla google-cla bot added the cla: yes label (This human has signed the Contributor License Agreement.) Sep 23, 2021
@tritone tritone requested a review from a team as a code owner September 23, 2021 17:23
@tritone tritone changed the title feat(internal): provide wrapping for retried errors [WIP] feat(internal): provide wrapping for retried errors Sep 23, 2021
@shollyman (Contributor) left a comment

The wrapping of the real error seems like the big improvement here. I like it.

internal/retry.go (review comment; outdated, resolved)
@codyoss (Member) left a comment

LGTM!

@tritone tritone added the automerge label (Merge the pull request once unit tests and other checks pass.) Sep 23, 2021
@gcf-merge-on-green gcf-merge-on-green bot merged commit ce5f4db into googleapis:master Sep 23, 2021
@gcf-merge-on-green gcf-merge-on-green bot removed the automerge label (Merge the pull request once unit tests and other checks pass.) Sep 23, 2021
tritone added a commit to tritone/google-cloud-go that referenced this pull request Sep 23, 2021
This piggybacks on googleapis#4797 to allow storage to expose wrapped
service errors when a call retries without success until timeout
or cancellation.

I also updated all checks for context sentinel errors in storage
to use xerrors.Is instead of strict equality. Users of this
package should do the same. I'll update the documentation on
errors from this package in a subsequent PR.

I believe we will have to bump the dependency on the root module
before merging this change.

Fixes googleapis#4197
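
A hedged sketch of the xerrors.Is change described in the commit message above; the isCtxSentinel helper and its package are made up for illustration. Strict equality stops matching once the context sentinel is wrapped, while xerrors.Is walks the wrap chain:

package storagehelpers // illustrative package name

import (
	"context"

	"golang.org/x/xerrors"
)

// isCtxSentinel reports whether err is, or wraps, a context sentinel error.
func isCtxSentinel(err error) bool {
	// Before: strict equality, which misses the wrapped retry error:
	//   return err == context.DeadlineExceeded || err == context.Canceled
	// After: xerrors.Is unwraps and matches either sentinel.
	return xerrors.Is(err, context.DeadlineExceeded) || xerrors.Is(err, context.Canceled)
}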
tritone added a commit to tritone/google-cloud-go that referenced this pull request Sep 24, 2021
Based on googleapis#4797, we should use xerrors.As for googleapi errors.
tritone added a commit to tritone/google-cloud-go that referenced this pull request Sep 24, 2021
Based on googleapis#4797, we should use xerrors.As for googleapi errors.
tritone added a commit to tritone/google-cloud-go that referenced this pull request Sep 24, 2021
Fix up type assertions with googleapi.Error to use xerrors.As.
In bigquery, this was only required in integration tests and in
the package docs.

Follows from googleapis#4797 and similar to some of the fixes in storage
in googleapis#4802 (storage required additional changes though).
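
Likewise, a sketch of the xerrors.As change described above; the httpStatusCode helper and its package name are hypothetical. A direct type assertion on *googleapi.Error fails for wrapped errors, while xerrors.As finds the typed error anywhere in the chain:

package bqhelpers // illustrative package name

import (
	"golang.org/x/xerrors"

	"google.golang.org/api/googleapi"
)

// httpStatusCode extracts the HTTP status code from a (possibly wrapped)
// *googleapi.Error, returning ok=false if none is present.
func httpStatusCode(err error) (code int, ok bool) {
	// Before: e, ok := err.(*googleapi.Error) -- fails once the error is wrapped.
	var gerr *googleapi.Error
	if xerrors.As(err, &gerr) {
		return gerr.Code, true
	}
	return 0, false
}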
codyoss pushed a commit that referenced this pull request Sep 27, 2021
Based on #4797, we should use xerrors.As for googleapi errors.
gcf-merge-on-green bot pushed a commit that referenced this pull request Oct 5, 2021
Fix up type assertions with googleapi.Error to use xerrors.As.
In bigquery, this was only required in integration tests and in
the package docs.

Follows from #4797 and similar to some of the fixes in storage
in #4802 (storage required additional changes though).
gcf-merge-on-green bot pushed a commit that referenced this pull request Oct 5, 2021
This piggybacks on #4797 to allow storage to expose wrapped
service errors when a call retries without success until timeout
or cancellation.

I also updated all checks for context sentinel errors in storage
to use xerrors.Is instead of strict equality. Users of this
package should do the same. I'll update the documentation on
errors from this package in a subsequent PR.

I believe we will have to bump the dependency on the root module
before merging this change.

Fixes #4197
@tritone tritone deleted the ctx-err-wrap branch June 16, 2023 21:06