
Handle concurrency rate limiting #21

Closed
mikelorant opened this issue May 13, 2024 · 2 comments · Fixed by #23
Labels
bug Something isn't working

Comments

@mikelorant
Contributor

When the concurrency of the deletion tasks is increased, AWS may send a rate limit response. This causes all other tasks to exit and an error to be reported.

I have attempted to address this in my own branch using the Failsafe package. While this implementation works, there is a better way.

The AWS SDK for Go has added the capability to implement a retry policy. This policy supports a number of behaviors, including exponential backoff.

From reading the documentation, this looks relatively simple to implement.

cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRetryer(func() aws.Retryer {
	return retry.AddWithMaxBackoffDelay(retry.NewStandard(), time.Second*5)
}))
client := s3.NewFromConfig(cfg)

Take note of the following warning:

Generally you will always want to return a new instance of a Retryer. This will avoid a global rate limit bucket being shared across all service clients.

This means we should move the instantiation of the S3 client into the goroutine so that the rate limit is not shared among the deletion workers.
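A minimal sketch of that approach (the worker loop, the batches slice, and the bucket name are hypothetical, not code from this repository): each goroutine builds its own client, and therefore its own Retryer and rate limit bucket.

package main

import (
	"context"
	"sync"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/retry"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func deleteConcurrently(ctx context.Context, bucket string, batches [][]types.ObjectIdentifier) {
	var wg sync.WaitGroup
	for _, batch := range batches {
		wg.Add(1)
		go func(batch []types.ObjectIdentifier) {
			defer wg.Done()

			// Instantiate the client inside the goroutine so each worker
			// gets its own Retryer and rate limit bucket.
			cfg, err := config.LoadDefaultConfig(ctx, config.WithRetryer(func() aws.Retryer {
				return retry.AddWithMaxBackoffDelay(retry.NewStandard(), time.Second*5)
			}))
			if err != nil {
				return // handle the configuration error appropriately
			}
			client := s3.NewFromConfig(cfg)

			// Throttling responses are retried by the SDK with exponential
			// backoff before an error is ever returned here.
			if _, err := client.DeleteObjects(ctx, &s3.DeleteObjectsInput{
				Bucket: aws.String(bucket),
				Delete: &types.Delete{Objects: batch},
			}); err != nil {
				return // handle the deletion error appropriately
			}
		}(batch)
	}
	wg.Wait()
}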

@mikelorant mikelorant added the bug Something isn't working label May 13, 2024
@soapiestwaffles soapiestwaffles added the in progress Issue is being worked on label May 17, 2024
@mikelorant
Contributor Author

@soapiestwaffles Can you confirm if this is in progress? I am looking at adding this feature and don't want to duplicate the work if you are already working on it.

My plan is to use the method outlined in this issue, which is to move the client instantiation into the goroutine and then add the backoff and retry logic to the client configuration.

@mikelorant
Contributor Author

Initial implementation in pull request #23.

@soapiestwaffles Let me know what you think, along with any changes or improvements.

Unit tests are passing successfully.

@soapiestwaffles soapiestwaffles added reviewing and removed in progress Issue is being worked on labels May 27, 2024
soapiestwaffles pushed a commit that referenced this issue May 27, 2024
When receiving a rate limit response from AWS, the application would
exit with an error. This was an undesirable outcome.

AWS has a feature to implement retrying using their `retry`
[package][1].

By default, there are a number of retryable conditions. The most relevant cases are as follows:

- Connection Errors
- RequestTimeout, RequestTimeoutException
- Throttling, ThrottlingException, ThrottledException,
  RequestThrottledException, TooManyRequestsException,
  RequestThrottled, SlowDown
- RequestLimitExceeded, BandwidthLimitExceeded, LimitExceededException

This means the default retryer will handle rate limiting, and the application will not
need to explicitly handle this case.
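
For illustration only (a sketch, not this project's code), relying on those defaults
requires no extra retry configuration:

cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	// handle the configuration error appropriately
}
// The client uses the SDK's standard retryer, which already treats the
// throttling codes listed above as retryable.
client := s3.NewFromConfig(cfg)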

It is also important to note the following about client rate limiting:

> Generally you will always want to return a new instance of a Retryer.
> This will avoid a global rate limit bucket being shared across all
> service clients.

This means that the instantiation of the S3 client must be handled
within each goroutine and cannot be shared as it was previously
implemented.

`AdaptiveMode` is the retry strategy that will be used:

> AdaptiveMode provides an experimental retry strategy that expands on
> the Standard retry strategy, adding client attempt rate limits. The
> attempt rate limit is initially unrestricted, but becomes restricted
> when the attempt fails with a throttle error.

The default values for `AdaptiveMode` are based on `NewStandard`, which
are:

- `DefaultMaxAttempts`: `3`
- `DefaultMaxBackoff`: `20s`
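
A rough sketch of how the adaptive retryer can be supplied through the client
configuration (not necessarily the exact code in #23):

cfg, err := config.LoadDefaultConfig(context.TODO(),
	config.WithRetryer(func() aws.Retryer {
		// AdaptiveMode builds on the Standard retryer, so the
		// DefaultMaxAttempts and DefaultMaxBackoff values above apply.
		return retry.NewAdaptiveMode()
	}),
)
if err != nil {
	// handle the configuration error appropriately
}
client := s3.NewFromConfig(cfg)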

[1]: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws/retry

Fixes: #21
Signed-off-by: Michael Lorant <michael.lorant@nine.com.au>