EventHubProducerClient stops sending batch and gives wrong maximumSizeInBytes in Exception #10682
Comments
I can mitigate the issue if I remake the `EventHubProducerClient`. Following is the code I am using:
Is there a config that would let me control for how long the …
Thanks for reporting this issue @shubhambhattar. @srnagar can you please investigate? /cc @conniey
I've created a small code sample to reproduce it. The code shared above is not complete and cannot be run as-is. This one can be run as-is just by including the dependency (version `5.0.3`).
The corresponding logs are:
This will probably be helpful. I've simulated the case where it takes more than 30 minutes (I used 31, just to be safe) to completely fill up the batch and send it. The difference between the time the batch is first initialized and the time it completely fills up has to be > 30 minutes. Do note the logs at the end: it's not a TimeoutException; instead they say the size is greater than 256 KB, which shouldn't be the case given that the max batch size when the batch was created is assumed to be 1022 KB.
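The mismatch the logs show (a 256 KB limit reported for a batch built under a 1022 KB limit) can be modeled with a tiny, self-contained sketch. This is plain Java with made-up names (`CREATE_LIMIT_BYTES`, `LINK_FALLBACK_BYTES`, `fits`), not SDK code; it only illustrates how a batch that was legal at creation time gets rejected if the limit applied at send time silently shrinks:

```java
// Illustration only: models a size limit that differs between batch
// creation and batch send (values taken from the issue report).
class BatchLimitMismatch {
    static final int CREATE_LIMIT_BYTES = 1022 * 1024; // limit when the batch was created
    static final int LINK_FALLBACK_BYTES = 256 * 1024; // hypothetical limit after link recreation

    // Returns true if a batch of the given size passes the given limit.
    static boolean fits(int batchSizeBytes, int limitBytes) {
        return batchSizeBytes <= limitBytes;
    }

    public static void main(String[] args) {
        int batchSize = 300 * 1024; // filled slowly over ~30 minutes
        // Legal under the 1022 KB limit the batch was created with:
        System.out.println(fits(batchSize, CREATE_LIMIT_BYTES));  // true
        // Rejected if the send-time check uses the 256 KB fallback:
        System.out.println(fits(batchSize, LINK_FALLBACK_BYTES)); // false
    }
}
```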
@shubhambhattar I am looking into this issue. Thanks for providing a reproducible sample. I will have an update soon.
Describe the bug
EventHubProducerClient stops sending batches and reports a different `maximumSizeInBytes` in the exception than what it is supposed to be.
Exception or Stack Trace
To Reproduce
Steps to reproduce the behavior:
Start sending messages with no `maximumSizeInBytes` value supplied (the library automatically sets it to 1022 KB) and simulate low traffic where it takes 30–35 minutes to fill up a batch. The code being used is the same as shown here: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/eventhubs/azure-messaging-eventhubs#create-an-event-hub-producer-and-publish-events and the library version is `5.0.3`.
Code Snippet
Expected behavior
If the message is too big for the current batch, then instead of an exception, the already created batch should first be sent to Event Hubs, and then a fresh batch should be created into which the message is inserted. Instead, an exception saying the message payload size is too large is being thrown.
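The expected flush-and-recreate flow described above can be sketched in plain Java. The `Batch` class below is a stand-in for `EventDataBatch` (its `tryAdd` mimics the SDK's "returns false when the event doesn't fit" contract, but none of this is SDK code): when an event does not fit, the current batch is "sent" and a fresh batch accepts the event.

```java
import java.util.ArrayList;
import java.util.List;

class BatchFlushSketch {
    // Stand-in for EventDataBatch: tracks bytes, rejects events that overflow.
    static class Batch {
        final int maxBytes;
        int usedBytes;
        Batch(int maxBytes) { this.maxBytes = maxBytes; }
        boolean tryAdd(int eventBytes) {
            if (usedBytes + eventBytes > maxBytes) return false;
            usedBytes += eventBytes;
            return true;
        }
    }

    // Adds events; when one doesn't fit, "sends" the batch and starts a new one.
    static List<Integer> publishAll(int[] eventSizes, int maxBatchBytes) {
        List<Integer> sentBatchSizes = new ArrayList<>();
        Batch batch = new Batch(maxBatchBytes);
        for (int size : eventSizes) {
            if (!batch.tryAdd(size)) {
                sentBatchSizes.add(batch.usedBytes); // flush the full batch
                batch = new Batch(maxBatchBytes);    // start a fresh one
                if (!batch.tryAdd(size)) {
                    throw new IllegalArgumentException("single event exceeds batch limit");
                }
            }
        }
        if (batch.usedBytes > 0) sentBatchSizes.add(batch.usedBytes);
        return sentBatchSizes;
    }

    public static void main(String[] args) {
        // Three 60-byte events against a 150-byte limit: two sends, never an exception.
        System.out.println(publishAll(new int[]{60, 60, 60}, 150)); // [120, 60]
    }
}
```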
Additional context
This is also only happening during low traffic when it takes a lot of time to fill up a batch (with default size being taken as 1022KB). This has occured twice around the same time frame where traffic is low and successive
eventHubProducerClient.send(eventDataBatch)
had> 30
mins time gap.Also want to point out that when I am not giving a
maximumBatchSize
in bytes (while creating a EventDataBatch), its automatically taken as1022
KB and suddenly in the stack trace, the size is given to be256
KB instead of1022
KB.From what I could figure out, the
maximumLinkSize
is taken asmaximumSizeInBytes
. Is it the case that if link is broken, it'll fallback to256
KB and since the already created batch size is greater than256
KB, we're getting this error?I got this from this method in
EventHubProducerAsyncClient
class' following method:Library version is:
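If the effective batch limit is indeed derived from the link's negotiated maximum, as the question above speculates, the arithmetic could look like the sketch below. This is a hedged illustration in plain Java, not the SDK's actual implementation; `effectiveLimit` and the fallback value are assumptions made for illustration only:

```java
// Hedged sketch: the smaller of the requested batch size and the link's
// negotiated maximum wins; a non-positive link size means "no link cap".
class EffectiveBatchLimit {
    static int effectiveLimit(int requestedBytes, int linkMaxBytes) {
        return linkMaxBytes > 0 ? Math.min(requestedBytes, linkMaxBytes) : requestedBytes;
    }

    public static void main(String[] args) {
        int requested = 1022 * 1024;
        // Link negotiated at 1 MB or more: the requested 1022 KB stands.
        System.out.println(effectiveLimit(requested, 1024 * 1024) / 1024 + " KB");
        // Re-created link capped at 256 KB: the limit shrinks below the
        // batch that was already built, matching the reported exception.
        System.out.println(effectiveLimit(requested, 256 * 1024) / 1024 + " KB");
    }
}
```

Under this model, any batch assembled against the original 1022 KB limit but larger than 256 KB would fail validation after the idle link is re-established, which matches the > 30 minute gap observed in the report.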
Information Checklist
Kindly make sure that you have added all the above information and checked off the required fields; otherwise we will treat the issue as an incomplete report.