feat: Support new 2026 retries #3379
base: version-3
Changes from 14 commits
```ruby
# @@ -35,6 +35,9 @@ def build_body(context)
def parse_response(response)
  response.data = parse_body(response.context)
rescue Json::ParseError => e
  # make JSON parsing errors on 200-range responses retryable
  response.error = Seahorse::Client::NetworkingError.new(e)
```
Contributor: This is a new addition from the last review, so I have some questions:
Contributor (author): The SCP tests didn't cover the other parser handlers, but I believe they should also be updated; I'll make the changes. With these changes, the error raised on the final attempt will be the NetworkingError, but it wraps the parsing error message, so the root cause will still be surfaced.
Contributor: Is this how other SDKs are handling this case, labeling it as a service error? Just curious 🤔
```ruby
end

def parse_body(context)
```
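The intent of the hunk above can be sketched outside the SDK. Everything below (`NetworkingError`, `parse_body`, `retryable?`) is a simplified stand-in for the Seahorse classes, not the actual implementation: the idea is that wrapping a JSON parse failure in a networking-style error lets a retry policy that already retries transport errors also retry a truncated or corrupted 200-range body.

```ruby
require 'json'

# Simplified stand-in for Seahorse::Client::NetworkingError.
class NetworkingError < StandardError
  attr_reader :original_error

  def initialize(original_error)
    super(original_error.message) # surface the root-cause message
    @original_error = original_error
  end
end

# Simplified stand-in for the SDK's response parsing.
def parse_body(raw)
  JSON.parse(raw)
rescue JSON::ParserError => e
  # re-raise under a retryable error class, keeping the parse error as cause
  raise NetworkingError.new(e)
end

# A retry policy would check something like this before scheduling a retry.
def retryable?(error)
  error.is_a?(NetworkingError)
end
```

Because the wrapper keeps the original parse error's message, the root cause still appears if all retry attempts are exhausted.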
Nice. Have we considered adding similar changelog entries for DynamoDB, since they have service-specific retryable behavior, or is that not needed? Now that I think about it, are service-specific behaviors documented anywhere?
I also have a question about the users who ARE already on `standard` mode for their `retry_mode`: do they feel any changes from this update?
Can I add a DynamoDB changelog entry even though only core was updated? We could extend the core changelog entry to mention that DynamoDB defaults change when the new retry behavior is enabled. I don't believe any service-specific behaviors are documented anywhere yet. Externally, I think the blog post should mention this new DynamoDB behavior; internally, I could add more comments or documentation.

Customers who are already on `standard` and opt in to the new retries will feel a difference. Due to the updated backoff timing, retries will be much faster. Throttling behavior will be the same, and due to the updated retry-quota draining, customers will fail faster during sustained service errors, but this is intentional, to help services recover faster.
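For illustration only, since the thread does not give the new backoff formula: a generic capped exponential backoff with full jitter shows the knobs that "much faster retries" would turn. The `base` and `cap` values below are placeholders, not the SDK's actual defaults.

```ruby
# Generic capped exponential backoff with full jitter (illustrative only).
# base and cap are placeholder values, not the SDK's real retry settings.
def backoff_delay(attempt, base: 0.1, cap: 20.0, rng: Random.new)
  ceiling = [cap, base * (2**attempt)].min
  rng.rand(0.0..ceiling) # full jitter: anywhere between 0 and the ceiling
end
```

Lowering `base` or tightening `cap` is one way retry latency drops for customers who opt in; the real parameters live in the SDK's retry configuration.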
I personally feel that a blog post is not enough documentation. Not everyone reads release blog posts or `aws-sdk-core`'s CHANGELOG entries.

Rethinking this... this might be a good use case for service-specific plugins. With plugins, you can be specific about this behavior and its documentation. Now that I think about it, we might also need to do something about the autogenerated config. Above is what I see when I run codegen. Let's talk offline.
We should probably add a separate entry about this. The way I read the above entry is: "OK, so if I don't use that env var, I'm still on the old `standard` retries mechanism."
Sure, I can try adding a DynamoDB plugin for retries instead.

Your original understanding is correct: customers who are already on `standard` and opt in to the new retries will feel a difference. If they do not set the environment variable, there will not be any differences. The new retry behavior is disabled by default and opt-in only.