JetStream publish batching #375
Comments
Thanks for the report. I am seeing similar results. It looks like nats bench sends the requests without waiting for each response, whereas in our .NET implementation we wait for each publish before sending the next. When I batch the publish tasks I'm seeing similar figures, e.g.:

```csharp
const int batch = 10_000;

for (int i = 0; i < msgCount / batch; ++i)
{
    var tasks = new List<Task<PubAckResponse>>();
    for (int j = 0; j < batch; j++)
    {
        // Start the publish but don't await it yet; collect the pending acks.
        Task<PubAckResponse> publishAsync = js.PublishAsync<byte[]>(subject: "test.subject", data).AsTask();
        tasks.Add(publishAsync);
    }

    // Await the whole batch of acks at once.
    foreach (var task in tasks)
        await task;
}

// Produced 100000 messages in 1417 ms; 71k msg/s ~ 138 MB/sec
//
// nats bench bar --js --pub 1 --size 2048 --msgs 100000
// Pub stats: 74,439 msgs/sec ~ 145.39 MB/sec
```

Edit: and if we batch all of it we get the same result.
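The "batch all of it" variant would presumably look something like the sketch below, assuming the same `js` context, `data` payload, and `msgCount` as in the snippet above (the original code for this edit was not included in the thread):

```csharp
// Sketch only: fire off every publish without awaiting, then collect all
// the acks in one go. Assumes `js`, `data`, and `msgCount` as defined above.
var tasks = new List<Task<PubAckResponse>>(msgCount);
for (int i = 0; i < msgCount; i++)
{
    tasks.Add(js.PublishAsync<byte[]>(subject: "test.subject", data).AsTask());
}

// Await all pending acks at once instead of per 10k batch.
await Task.WhenAll(tasks);
```

Note this keeps every pending ack in memory at once, so for very large message counts the fixed-size batching above is the safer pattern.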
Yes, I figured it must have something to do with batching. I found
Anyway, shouldn't batching be implemented in the client, similar to how it's done in the Kafka client (batch.size, linger.ms)?
We should be able to implement that, but I'm not sure what the API would look like in terms of collecting ACKs.
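To make the design question concrete, one possible shape for such an API is sketched below. These are purely hypothetical names; nothing like this exists in NATS.Client.Core v2.0.3. The idea mirrors Kafka's producer: buffer messages until a size or linger threshold, then flush and hand back the collected acks:

```csharp
// Hypothetical API sketch, loosely modeled on Kafka's batch.size / linger.ms.
// None of these types exist in the current client; this only illustrates one
// way the "how do callers collect ACKs" question could be answered.
public record NatsJSBatchOpts(int MaxBatchSize = 1000, TimeSpan? Linger = null);

public interface INatsJSBatchPublisher : IAsyncDisposable
{
    // Enqueue a message; completes when the message is buffered, not acked.
    ValueTask AddAsync<T>(string subject, T data, CancellationToken ct = default);

    // Flush the buffer and return the acks for everything sent so far.
    ValueTask<IReadOnlyList<PubAckResponse>> FlushAsync(CancellationToken ct = default);
}
```

An alternative would be exposing the acks as an `IAsyncEnumerable<PubAckResponse>` so callers can observe failures as they arrive rather than only at flush time.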
Observed behavior
Is the .NET client 10x slower than the native one, or am I doing something wrong?
And the code I'm using (.NET 8):
Expected behavior
It should have similar performance.
Server and client version
nats-server: v2.10.10
nats-cli: v0.1.1
NATS.Client.Core: v2.0.3
Host environment
No response
Steps to reproduce
No response