
BadHttpRequestException: Reading the request body timed out due to data arriving too slowly #4707

Closed
clement911 opened this issue Oct 16, 2018 · 137 comments
Labels
affected-medium This issue impacts approximately half of our customers area-networking Includes servers, yarp, json patch, bedrock, websockets, http client factory, and http abstractions enhancement This issue represents an ask for new feature or an enhancement to an existing one feature-kestrel severity-nice-to-have This label is used by an internal tool
Milestone

Comments

@clement911

We have an asp.net core 2.1 app running on .net 4.7 in an Azure web app service.

Recently we started getting A LOT of the following error:

Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.

Our app is under pretty constant heavy load (from incoming webhooks mostly). During big spikes, we tend to see more of these.

We used to run on asp.net core 2.0 / .net 4.6.1 for many months and we never saw that error before.
It seems to have started happening following our recent upgrade to asp.net core 2.1 / .net 4.7.

We'd like to get to the bottom of this issue, but we are not even sure where to start.
Any thoughts?

@Tratcher
Member

@sebastienros have you worked up an FAQ wiki yet?

@davidfowl
Member

I wonder if it's because of thread pool starvation... Is your application fully async? Do you have any other logs from Kestrel? Can you capture a dump or a profile? Is the application CPU bound or IO bound?

@clement911
Author

It happened just a couple of times earlier; usually it happens for several successive requests in a short time.
I'm monitoring cpu, memory, thread count and a bunch of other metrics in Azure monitor and it all looks healthy.
So even when there is no apparent bottleneck, and the app is very responsive, it can still happen occasionally.
Yes it is fully async.

Do you have any other logs from Kestrel?

No other warnings/errors. There are probably info-level logs, but I'm not persisting those, although there is probably a way to get them.

Can you capture a dump or a profile?

I should be able to capture a dump from the azure portal if you think this will help?
Not sure what a profile is?

@clement911
Author

Any thoughts on how to get to the bottom of this issue?

We're not sure if the errors are caused by legit requests.
Is it possible to find out the URL that was requested when this message gets generated?

We have been getting a couple of hundred of these per day, even though all metrics look healthy.

Any particular log source that I can turn on in azure/asp.net core maybe?

The stack trace doesn't give us much.

Exception: Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.
   at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.IO.Pipelines.PipeCompletion.ThrowLatchedException()
   at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
   at System.IO.Pipelines.Pipe.TryRead(ReadResult& result)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.OnConsumeAsync()

@clement911
Author

There was just one log entry with a slightly better call stack including some of our calling code.
The stack is pointing to our webhook action.
In order to verify the webhook is valid, we must hash the request body and we use the following code.
Is there anything wrong with this code?

[HttpPost]
public async Task<OkResult> Webhook()
{
    var memoryStream = new MemoryStream();
    await Request.Body.CopyToAsync(memoryStream);
    bool isValidRequest = await AuthorizationService.IsAuthenticWebhook(Request.Headers, memoryStream, _shopifyLibOptions.SecretKey);
    if (!isValidRequest)
    {
        throw new UnauthorizedAccessException("This request is not an authentic webhook request.");
    }
    memoryStream.Position = 0;
    string body = new StreamReader(memoryStream).ReadToEnd();

    //omitted code here unrelated to asp.net

    return Ok();
}

The call stack is pointing to the line
bool isValidRequest = await AuthorizationService.IsAuthenticWebhook(Request.Headers, memoryStream, _shopifyLibOptions.SecretKey);
Although in my experience, line numbers in call stacks are sometimes a bit off, and I wonder if the real issue is the line before: await Request.Body.CopyToAsync(memoryStream);

Then the stack below is

Exception: Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.
   at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.IO.Pipelines.PipeCompletion.ThrowLatchedException()
   at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
   at System.IO.Pipelines.Pipe.GetReadAsyncResult()
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.MessageBody.d__22.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpRequestStream.d__23.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at LINE FROM OUR CODE

@Tratcher
Member

.ReadToEnd() is a red flag; that's blocking sync IO. This data rate error has often been reported in applications that are having trouble with too many blocked thread pool threads. In this case there's an easy ReadToEndAsync() replacement. Have you checked for thread starvation issues elsewhere?
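
For reference, a minimal sketch of the suggested swap to ReadToEndAsync(); the action name and return value here are illustrative only, and as the next reply points out, the stream being read in the code above is actually a MemoryStream:

using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ExampleController : Controller
{
    // Hypothetical action: ReadToEndAsync() frees the thread pool thread
    // while the request body is still arriving, instead of blocking on it.
    [HttpPost]
    public async Task<IActionResult> Handle()
    {
        using (var reader = new StreamReader(Request.Body, Encoding.UTF8))
        {
            string body = await reader.ReadToEndAsync();
            return Ok(body.Length);
        }
    }
}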

@clement911
Author

But the stream here is a memory stream. That's why it's using the sync version. No I/O right?

The number of threads is low and metrics are healthy.
This issue seems quite sporadic but we get these errors in batches of 5 or 10...
Could it be caused by the GC causing a pause?

@Tratcher
Member

Ah, you got me. I wasn't reading closely enough.

@davidfowl
Member

How big are the posts?

@clement911
Author

They are pretty small json payloads. Order, Customer, Product, this kind of thing...

@davidfowl
Member

What size is pretty small?

@clement911
Author

Typically under 10k characters, although there are a few outliers over 100k characters.
We get a lot of requests though. Roughly 500k per day.

@clement911
Author

Are there any further logs that can be turned on to get to the bottom of this issue?
This is really polluting our log files and we're still not sure why it occurs in the first place.

@clement911
Author

We appear to have gotten rid of the issue.
We are not entirely sure but most likely it was caused by some code accessing the request object from a background thread.

@davidfowl
Member

We are not entirely sure but most likely it was caused by some code accessing the request object from a background thread.

Using the IHttpContextAccessor? What did that code look like?

@clement911 clement911 reopened this Oct 29, 2018
@clement911
Author

Sorry I had to re-open.
I thought it was fixed after our deployment on Sunday evening, but the absence of errors was simply due to the very low activity at this time of the week.
Once Monday kicked in, this error started flooding again.

David, that deployment included a fix related to the IHttpContextAccessor that you helped me resolve here. I incorrectly thought this might have fixed this issue as well.

The call stack varies slightly when the error is captured in the log. Usually there are only asp.net frames, but it also sometimes happens in this new piece of code, which is very similar to the one I posted above handling webhooks:

Simplified a bit for clarity:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Extensions;
using Microsoft.AspNetCore.Http.Internal;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Middlewares
{
    public class RequestLogScopeMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<RequestLogScopeMiddleware> _logger;

        public RequestLogScopeMiddleware(RequestDelegate next, ILogger<RequestLogScopeMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task Invoke(HttpContext context)
        {
            using (_logger.BeginScope(await GetHttpInfo(context)))
            {
                await _next(context);
            }
        }

        private async Task<HttpInfo> GetHttpInfo(HttpContext ctx)
        {
            try
            {
                var req = ctx.Request;
                req.EnableRewind();
                string body = await new StreamReader(req.Body).ReadToEndAsync();
                req.Body.Position = 0;
                try
                {
                    body = JsonConvert.SerializeObject(JsonConvert.DeserializeObject(body), Formatting.Indented, new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore });
                }
                catch
                {
                    //body may not be json
                }
                return new HttpInfo(req.Method, req.GetDisplayUrl(), body, string.Join(Environment.NewLine, req.Headers.Select(kv => $"{kv.Key}: {kv.Value}")));
            }
            catch (Exception ex)
            {
                _logger.LogWarning(ex, "Failed to extract http info from request");
                return null;
            }
        }
    }
}

Logged error:

Failed to extract http info from request

    HttpInfo: 

    Exception: Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.
   at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.IO.Pipelines.PipeCompletion.ThrowLatchedException()
   at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
   at System.IO.Pipelines.Pipe.TryRead(ReadResult& result)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.OnConsumeAsync()

I'm pretty sure that it is related to await new StreamReader(req.Body).ReadToEndAsync();

@clement911
Author

Here is another stack trace clearly showing that reading the stream is the issue:

Error from Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware:
    An unhandled exception has occurred while executing the request.

    HttpInfo: 

    Exception: Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.
   at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.IO.Pipelines.PipeCompletion.ThrowLatchedException()
   at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
   at System.IO.Pipelines.Pipe.ReadAsync(CancellationToken token)
   at System.IO.Pipelines.Pipe.DefaultPipeReader.ReadAsync(CancellationToken cancellationToken)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.MessageBody.d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpRequestStream.d__21.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.WebUtilities.FileBufferingReadStream.d__35.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.IO.StreamReader.d__97.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.IO.StreamReader.d__62.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Middlewares.RequestLogScopeMiddleware.d__5.MoveNext()

@davidfowl
Member

@halter73 any ideas?

PS: are you using that middleware in production? It looks terribly inefficient.

@clement911
Author

Yes we are using this in prod.
We haven't suffered any performance issues, but I'm all ears if you have ideas on how to make it more performant.
Having a log of the body is very useful for investigating why a given request had errors...

@davidfowl
Member

Having a log of the body is very useful for investigating why a given request had errors...

Do you have structured logging in place? You don't need to attach this much information to the scope, you just need enough information attached to every request so that you can find all correlated logs. You would log the body in chunks and then look at the entire payload by looking at all logs for a specific request id.

We haven't suffered any performance issues but I'm listening if you have ideas on how to make this more performant.

It's a DoS waiting to happen. I can just send a large payload to your application and boom, out of memory. (see https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/Scenarios/Controllers/BigJsonInputController.cs for an example). It'll also be one of the ASP.NET Core no-no (see https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AspNetCoreGuidance.md#avoid-reading-the-entire-request-body-or-response-body-into-memory)

Having a log of the body is very useful for investigating why a given request had errors...

Yes, I've seen it enough times now that I think we should build something more efficient out of the box #3700
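
For reference, a minimal sketch of the scope-only approach described above; the class name and the scope fields are illustrative, not taken from the thread:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Extensions;
using Microsoft.Extensions.Logging;

namespace Middlewares
{
    // Attaches only lightweight correlation data (request id, method, URL) to the
    // logging scope instead of buffering the whole request body up front. Any body
    // logging can then be done separately, in bounded chunks, and correlated back
    // to the request via the request id.
    public class RequestCorrelationScopeMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<RequestCorrelationScopeMiddleware> _logger;

        public RequestCorrelationScopeMiddleware(RequestDelegate next, ILogger<RequestCorrelationScopeMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task Invoke(HttpContext context)
        {
            using (_logger.BeginScope("RequestId:{RequestId} {Method} {Url}",
                context.TraceIdentifier, context.Request.Method, context.Request.GetDisplayUrl()))
            {
                await _next(context);
            }
        }
    }
}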

@clement911
Author

Do you have structured logging in place? You don't need to attach this much information to the scope, you just need enough information attached to every request so that you can find all correlated logs. You would log the body in chunks and then look at the entire payload by looking at all logs for a specific request id.

I guess you could say it is structured, yes.
Logging the body in chunks? I'm not too sure what you mean?
Does that mean that I have to reconstruct the body from multiple log entries?
It certainly would be much simpler to have a single entry with the full body.

Also, the vast majority of requests we get are webhooks, which can take various shapes (but always in JSON format), and we must validate that they are authentic by hashing the body.
(see the Webhook() method I posted above)
So we can't use [FromBody] because the payload can differ a lot from request to request.

It's a DoS waiting to happen. I can just send a large payload to your application and boom, out of memory.

I see how this could be a problem.
Should I use something like BigContentManualGood() from your linked sample?
As far as I can tell it still loads the entire json in memory, into a JObject, so I fail to see how this is different?
I guess there must be some sensible max-request-size limit default as well in asp.net core?

Going back to the original issue, I just wanted to point out that the pattern we're seeing is that these errors occur in batches.
There won't be any issues for a while (5 minutes or even up to a couple of hours) and then all of a sudden we get that error many times in rapid succession (say between 3 and 10+).
That may be a useful piece of information.
This got me thinking that it might be caused by a longer than usual GC pause... Just an idea...
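
On the max-request-size question above: Kestrel does enforce a default MaxRequestBodySize of 30,000,000 bytes (roughly 28.6 MB), and it can be lowered. A minimal sketch, assuming the standard 2.1 WebHost/Startup layout; the 1 MB figure is only an example, and individual actions can also be capped with [RequestSizeLimit]:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateWebHostBuilder(args).Build().Run();

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                // Requests with bodies larger than this are rejected by Kestrel
                // before the application buffers them.
                options.Limits.MaxRequestBodySize = 1024 * 1024; // 1 MB
            })
            .UseStartup<Startup>();
}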

@clement911
Author

Is there a difference between the following two?

await new StreamReader(Request.Body).ReadToEndAsync();
await new HttpRequestStreamReader(Request.Body, Encoding.UTF8).ReadToEndAsync();

@clement911
Author

We deployed yet again today to try to fix this.
We no longer collect the body when logging, based on @davidfowl 's good comments about perf and security.
We removed all usages of IHttpContextAccessor, and we are left with the only other piece of code that reads the entire body and causes the error: our Webhook() action, which now looks like this:

[HttpPost]
public async Task<OkResult> Webhook()
{
    var body = await new HttpRequestStreamReader(Request.Body, Encoding.UTF8).ReadToEndAsync();
    bool isValidRequest = AuthorizationService.IsAuthenticWebhook(Request.Headers, body, _options.SecretKey);
    if (!isValidRequest)
    {
        throw new UnauthorizedAccessException("This request is not an authentic webhook request.");
    }
    //handle webhook
    return Ok();
}

Using HttpRequestStreamReader hasn't fixed the problem.

We also wanted to rule out temporary thread pool exhaustion (even though the thread count counter in the Azure portal looks ok), because we thought that the thread pool might take time to ramp up threads, and maybe a spike of requests might cause temporary starvation.
But, it looks like there are plenty of threads and the error is still happening...

public override void OnException(ExceptionContext context)
{
    context.ExceptionHandled = true;
    ThreadPool.GetAvailableThreads(out var workerThreads, out var completionPortThreads);
    ThreadPool.GetMaxThreads(out var maxWorkerThreads, out var maxCompletionPortThreads);
    _logger.LogError(context.Exception, $"Error in global mvc exception filter. Available Threads = ({workerThreads:N0}/{maxWorkerThreads:N0}, {completionPortThreads:N0}/{maxCompletionPortThreads:N0}).");
}
Error in global mvc exception filter. Available Threads = (32,757/32,767, 1,000/1,000).

    Exception: Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.
   at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.IO.Pipelines.PipeCompletion.ThrowLatchedException()
   at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
   at System.IO.Pipelines.Pipe.TryRead(ReadResult& result)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.OnConsumeAsync()

We are running out of ideas.

@tstuts

tstuts commented Nov 1, 2018

@clement911 What is your App Service scale?

We are having what looks like the same issue. ASP.NET Core 2.1 app in an Azure App Service (Windows) deployment with similar behavior - fine for a few minutes, then a group of connections all throw this exception, then fine again.

All requests are POSTs with a JSON body that is being read and deserialized in a middleware.

@clement911
Author

Hi @tstuts,
It's a deployment with 3 to 5 S2 instances and we get a ton of requests, mostly webhooks.
Several 100k per day.

I'm not sure if it is related to that issue, but we also noticed we have been getting a lot more of the following errors. Nowhere near as many, but we used to get it maybe once a day and now we get it a lot more, and I feel like they might be happening around the same time as the original issue.

Warning from Microsoft.AspNetCore.Server.Kestrel:
    Heartbeat took longer than "00:00:01" at "11/01/2018 19:48:02 +00:00".

@davidfowl
Member

That usually means there's thread pool starvation... Hmm

@clement911
Author

@davidfowl I don't understand how this is possible given that ThreadPool.GetAvailableThreads is reporting there are plenty of threads available?
We don't create manual threads.
We have a few Task.Run() and also several periodic background jobs scheduled via Observable.Timer, which uses the ThreadPool under the hood.

Here is a screenshot from Azure Portal showing our thread count over the last 12 hours.
I don't believe 200+ threads is excessive?

[screenshot: Azure Portal thread count over the last 12 hours]

@Tratcher
Member

Tratcher commented Nov 1, 2018

200 threads is a lot more than you have cores. If those threads are all active then some of them may not be getting enough CPU time (similar to thread pool starvation). We're making some changes in the data rate timers to mitigate these scenarios. aspnet/KestrelHttpServer#3070. That said, your system is likely still overtaxed.

@Bit-Shifts

Having the same issue as well.

@highfield

For those who are still experiencing the issue, did you read my post? #4707 (comment)

As of today, I have had NO issues for over a year on two different servers.

@mohitvaid

mohitvaid commented May 16, 2020

Any update on this issue? I am also facing the same exception. The request is pretty small, but Kestrel is throwing a 408 with the message "request arriving too slow".

"message": "Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate.", "exception": {"Type":"Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException", "StatusCode":408, "TargetSite":"Void Throw(Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.RequestRejectionReason)", "StackTrace":"   at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)\r\n   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1ContentLengthMessageBody.ReadAsyncInternal(CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpRequestStream.ReadAsyncInternal(Memory`1 buffer, CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.WebUtilities.FileBufferingReadStream.ReadAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.WebUtilities.StreamHelperExtensions.DrainAsync(Stream stream, ArrayPool`1 bytePool, Nullable`1 limit, CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.Mvc.Formatters.NewtonsoftJsonInputFormatter.ReadRequestBodyAsync(InputFormatterContext context, Encoding encoding)\r\n   at Microsoft.AspNetCore.Mvc.ModelBinding.Binders.BodyModelBinder.BindModelAsync(ModelBindingContext bindingContext)\r\n   at Microsoft.AspNetCore.Mvc.ModelBinding.ParameterBinder.BindModelAsync(ActionContext actionContext, IModelBinder modelBinder, IValueProvider valueProvider, ParameterDescriptor parameter, ModelMetadata metadata, Object value)\r\n   at Microsoft.AspNetCore.Mvc.Controllers.ControllerBinderDelegateProvider.<>c__DisplayClass0_0.<<CreateBinderDelegate>g__Bind|0>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextExceptionFilterAsync>g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ExceptionContextSealed context)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\r\n   at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Logged|17_1(ResourceInvoker invoker)\r\n   at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)\r\n

@xcl-net

xcl-net commented Jun 20, 2020

    [HttpPost]
    [ProducesResponseType(typeof(BaseResponse<bool>), 1)]
    public async Task<BaseResponse> SyncMsg()
    {
        string data = string.Empty;
        try
        {
            using (var sr = new StreamReader(Request.Body))
            {
                data = await sr.ReadToEndAsync();

                //todo: if length is more than 1024, decompress
                data = StringHandleHelper.Decompress(data);

                var url = ConfigurationManager.GetValue("CallBackUrl");

#pragma warning disable CS4014 // Because this call is not awaited, execution of the current method continues before the call is completed
                Task.Factory.StartNew((obj) =>
                {
                    _messageService.SyncMsg((string)obj, url);
                }, data);
#pragma warning restore CS4014 // Because this call is not awaited, execution of the current method continues before the call is completed

                var j_data = JToken.Parse(data);
                var type = j_data["type"].ToObject<PcWxMsgType>();
                var wxid = j_data["wxid"].ToObject<string>();

                return Response2WeChatRobot(wxid, type);
            }
        }
        catch (Exception ex)
        {
            return new BaseResponse() { Code = -2, Msg = "Site: " + ex.Message };
        }
    }

I've seen all the discussions. Is there an effective solution now?

@mohitvaid

Is this going to get fixed in the upcoming release?

@kenyeung128

Hi all,
We're starting a new project that again involves saving webhooks to the database for further processing.
We expect small payloads but a high number of requests.

Is the following code optimal?

[HttpPost]
public async Task<OkResult> Webhook()
{
    var body = await new HttpRequestStreamReader(Request.Body, Encoding.UTF8).ReadToEndAsync();
    //save to Db asynchronously?
    return Ok();
}

@clement911 do you think moving to DB processing first would solve the ReadToEndAsync issue?

Or have you tried binding to a structured class instead of reading the object from the stream?

@Gopichandar

Gopichandar commented Aug 28, 2020

OK, I started reading this thread an hour ago. Now I'm at the end of the thread and still haven't found a solution. What is the solution now for .NET 3.1? I'm having the same issue.

@davidfowl
Member

It's because the issue has turned into lots of different issues 😄. There's no repro for this issue, and it depends on the environment and client speed (at least the original issue does). We know there's an issue somewhere, but it's not clear how we can make progress with the information provided here.

@kenyeung128

kenyeung128 commented Aug 28, 2020

@Gopichandar I came across this and solved the issue as follows:

  • refactor most methods to use the await + async Task Method() pattern

It seems the root cause in our case was all CPU being used waiting for tasks (mostly DB/IO related) to finish, and this causes the exception BadHttpRequestException: Reading the request body timed out due to data arriving too slowly.
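
For reference, a minimal before/after sketch of the kind of refactor described above; the repository interface and names are hypothetical:

using System.Threading.Tasks;

// Hypothetical data-access interface, used only for illustration.
public interface IOrderRepository
{
    Task<string> LoadOrderJsonAsync(int id);
}

public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders) => _orders = orders;

    // Before: sync-over-async parks a thread pool thread for the whole DB/IO call,
    // which under load can starve Kestrel of the threads it needs to pump request bodies.
    public string LoadOrderJson(int id) => _orders.LoadOrderJsonAsync(id).Result;

    // After: async all the way down, so the thread returns to the pool while waiting.
    public Task<string> LoadOrderJsonAsync(int id) => _orders.LoadOrderJsonAsync(id);
}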

@rasmus-s

@davidfowl I think I was able to reproduce this issue over and over while debugging my own code. I have made a short video and Wireshark captures, but I cannot share them here in this public GitHub repo. What other, more private channels can I use to reach you and the MS team?

The longer story:
I was making a few million PUT calls to our servers and around 0.08% of them failed, even after multiple retry attempts. All had the same message on the server side: "Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate". The server was certainly not idle during those calls. But the day after, the server was almost idle, and I was able to take the failed calls and re-execute them one request at a time. To my big surprise, the payloads consistently failed, and I still see the same error message in our server logs.
I also tried increasing the timeout, to give the server time to find a free thread, but I still got the same errors, just after 10 seconds instead of the default 5 seconds:
options.Limits.MinRequestBodyDataRate = new MinDataRate(120, TimeSpan.FromSeconds(10))

After hours of investigation I found out that there were two things I could change to make the call succeed:

  1. Disconnect Cisco VPN.
    The VPN splits the payload into smaller TCP packets (here it is 5 packets). It's always the last packet that needs to be retransmitted. See the Wireshark capture below; ".135" is my PC and ".57" is the server. (When I'm disconnected from the VPN, the payload is sent in a single TCP packet.) I was also connected to the VPN when I made the few million calls.
    [Wireshark capture screenshot]

  2. Change the payload size by 1 byte: either add or remove one character from the JSON payload (of course on a field that is not analyzed on the server side).
    The payload is still split into 5 TCP packets, due to being connected to the VPN.

The payload is ~6 KB.
It's not only from my PC with the VPN that we see this error; we see it from our end customers' requests around the world without a VPN. But this investigation is the closest I have been to debugging, analyzing and understanding this error message that we see in our server logs.

I don't think it's a thread pool issue on the server side, because I am able to reproduce it over and over, and a 1-byte change in the payload size can affect the result. And the other day the server handled way more requests.

I have seen the error message "Reading the request body timed out due to data arriving too slowly. See MinRequestBodyDataRate" both on requests to routes that use simple asp.net core model binding ([FromBody]) and on routes that read the body themselves with await Request.Body.ReadAsBytesAsync().

So the question is: what is it in aspnetcore that can cause this issue? And how do I share more data with you guys?

@BrennanConroy BrennanConroy added affected-medium This issue impacts approximately half of our customers enhancement This issue represents an ask for new feature or an enhancement to an existing one severity-nice-to-have This label is used by an internal tool labels Oct 21, 2020 — with ASP.NET Core Issue Ranking
@norouly

norouly commented Nov 9, 2020

It seems like I found a workaround until this gets fixed.
I had the same issues as @rasmus-s. It kept working until the TCP packets got split into separate ones, which (for me) was exactly at 1398 bytes. Adding the overhead, that's oddly close to the ~1500 bytes of an Ethernet frame.
After doing some research I found a post where someone suggested checking the MTU size, which I did with ping ip -f -l xxx. Sure enough, -l 1473 returned the message Packet needs to be fragmented but DF set, so 1472 seems to be the limit where packets won't be split and where the exception does not occur.
To compare my result with something else, I hopped onto a system where it worked fine and ran that command again; this time no Packet needs to be fragmented but DF set was returned. I even used -l 65500, which is the maximum, and got no message.
It actually does seem like the MTU is the problem here.

I'll change the MTU of the first system where it doesn't work and keep this updated.

  • Looks like jumbo frames are the solution. I'll post an update once I've verified this.

@ghost

ghost commented Jan 30, 2021

Thank you for contacting us. Due to a lack of activity on this discussion issue we're closing it in an effort to keep our backlog clean. If you believe there is a concern related to the ASP.NET Core framework, which hasn't been addressed yet, please file a new issue.

This issue will be locked after 30 more days of inactivity. If you still wish to discuss this subject after then, please create a new issue!

@ghost ghost closed this as completed Jan 30, 2021
@kevbry

kevbry commented Feb 19, 2021

@davidfowl any chance to reopen this? Looks like rasmus-s found a repro

@davidfowl davidfowl reopened this Feb 19, 2021
@davidfowl
Member

@halter73 check out the latest comments

@Leonardo-Ferreira

I think it's worth mentioning that sending/reading data slowly is a vector for DDoS attacks.

@halter73
Member

halter73 commented Mar 24, 2021

I think it's worth mentioning that sending/reading data slowly is a vector for DDoS attacks.

Correct. Specifically, this is called a slowloris attack and that's exactly what this min rate timeout and resulting exception are designed to mitigate.

@BrandonPotter

I don't think anyone is arguing that it's not a sane default... but for applications that need to potentially receive data slowly, we're looking for a reliable way to disable this.

@davidfowl
Member

There is: you can set the minimum data rate (the error itself has a pointer to the thing you need to change): https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.server.kestrel.core.kestrelserverlimits.minrequestbodydatarate?view=aspnetcore-5.0
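
A minimal sketch of both options, global and per request; the class and method names here are only illustrative wrappers, not an existing API:

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.AspNetCore.Server.Kestrel.Core.Features;

public static class MinDataRateConfig
{
    // Global: call from UseKestrel(options => Configure(options)) to relax the limit server-wide.
    public static void Configure(KestrelServerOptions options)
    {
        // Allow slower clients: 100 bytes/sec after a 20-second grace period
        // (the default is 240 bytes/sec with a 5-second grace period).
        options.Limits.MinRequestBodyDataRate = new MinDataRate(100, TimeSpan.FromSeconds(20));

        // Or remove the check entirely; note this also removes the built-in slowloris protection:
        // options.Limits.MinRequestBodyDataRate = null;
    }

    // Per request: e.g. from a middleware, for endpoints that legitimately receive slow uploads.
    public static void DisableForRequest(HttpContext context)
    {
        var feature = context.Features.Get<IHttpMinRequestBodyDataRateFeature>();
        if (feature != null)
        {
            feature.MinDataRate = null; // disables the rate check for this request only
        }
    }
}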

@ghost

ghost commented May 24, 2021

Thank you for contacting us. Due to a lack of activity on this discussion issue we're closing it in an effort to keep our backlog clean. If you believe there is a concern related to the ASP.NET Core framework, which hasn't been addressed yet, please file a new issue.

This issue will be locked after 30 more days of inactivity. If you still wish to discuss this subject after then, please create a new issue!

@ghost ghost closed this as completed May 24, 2021
@ghost ghost locked as resolved and limited conversation to collaborators Jun 23, 2021
@amcasey amcasey added area-networking Includes servers, yarp, json patch, bedrock, websockets, http client factory, and http abstractions and removed area-runtime labels Jun 2, 2023
This issue was closed.