Handle concurrency rate limiting #21
Comments
@soapiestwaffles Can you confirm whether this is in progress? I am looking at adding this feature and don't want to duplicate the work if you are already working on it. My plan is to use the method outlined in this issue: move the client instantiation into the Go routine and then add the backoff and retry logic to the client configuration.
Initially implemented in pull request #23. @soapiestwaffles Let me know what you think and whether any changes or improvements are needed. Unit tests are passing.
When receiving a rate limit response from AWS, the application would exit with an error. This was an undesirable outcome.

AWS has a feature to implement retrying using their `retry` [package][1]. By default there are a number of retryable conditions. The relevant cases are as follows:

- Connection errors
- `RequestTimeout`, `RequestTimeoutException`
- `Throttling`, `ThrottlingException`, `ThrottledException`, `RequestThrottledException`, `TooManyRequestsException`, `RequestThrottled`, `SlowDown`
- `RequestLimitExceeded`, `BandwidthLimitExceeded`, `LimitExceededException`

This means the default retryer will handle rate limiting, so this case will not need to be handled explicitly.

It is also important to note the following about client rate limiting:

> Generally you will always want to return a new instance of a Retryer.
> This will avoid a global rate limit bucket being shared across all
> service clients.

This means that the instantiation of the S3 client must be handled within each Go routine and cannot be shared as it was previously implemented.

`AdaptiveMode` is the retry strategy that will be used:

> AdaptiveMode provides an experimental retry strategy that expands on
> the Standard retry strategy, adding client attempt rate limits. The
> attempt rate limit is initially unrestricted, but becomes restricted
> when the attempt fails with a throttle error.

The default values for `AdaptiveMode` are based on `NewStandard`, which are:

- `DefaultMaxAttempts`: `3`
- `DefaultMaxBackoff`: `20s`

[1]: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws/retry

Fixes: #21

Signed-off-by: Michael Lorant <michael.lorant@nine.com.au>
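The approach above can be sketched as follows. This is a minimal illustration, not the exact code from the pull request; it assumes the `aws-sdk-go-v2` packages and a hypothetical `newS3Client` helper that each Go routine would call to get its own client and retryer:

```go
// Sketch: build a per-goroutine S3 client configured with AdaptiveMode
// retries (aws-sdk-go-v2). Each goroutine calls newS3Client so that the
// adaptive rate-limit bucket is never shared between deletion threads.
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/retry"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// newS3Client is a hypothetical helper name used for illustration.
func newS3Client(ctx context.Context) (*s3.Client, error) {
	cfg, err := config.LoadDefaultConfig(ctx,
		// Return a new retryer per client; a shared retryer would mean a
		// global rate-limit bucket across all service clients.
		config.WithRetryer(func() aws.Retryer {
			return retry.NewAdaptiveMode()
		}),
	)
	if err != nil {
		return nil, err
	}
	return s3.NewFromConfig(cfg), nil
}
```

With this in place, throttling responses such as `SlowDown` are retried by the SDK itself, up to the standard defaults of 3 attempts and a 20s maximum backoff.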
When increasing the concurrency of the deletion tasks, AWS may send a rate limit response. This causes all other tasks to exit and an error to be reported.
I have attempted to address this in my own branch using the Failsafe package. While this implementation works, there is a better way.
The AWS SDK for Go has added the capability to implement a retry policy. This policy includes several behaviors, including exponential backoff.
From reading the documentation, this looks relatively simple to implement.
Take note of the following warning from the `retry` package documentation:

> Generally you will always want to return a new instance of a Retryer.
> This will avoid a global rate limit bucket being shared across all
> service clients.
This means we should move the instantiation of the S3 client into the Go routine so that the rate limit is not shared among each deletion thread.