
Lambda memory consumption issues #336

Closed
ljacobsson opened this issue Oct 15, 2018 · 8 comments
Labels
bug (This issue is a bug.) · closed-for-staleness · module/lambda-client-lib · response-requested (Waiting on additional info and feedback. Will move to close soon in 7 days.)

Comments

@ljacobsson

Recently we have been seeing constantly growing memory usage in certain functions, resulting in "Process exited before completing request" errors and a new Lambda instance being started.

We first noticed this on 2018-09-11 at 12:43:06. Shortly before that we had made a deployment changing the runtime from dotnetcore2.0 to 2.1. No other significant changes were made to the code that could cause a memory leak, and it was running without errors before that deployment.

I've reproduced the error with the following code:

using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;
using Newtonsoft.Json;

public class DynamoDBTrigger
{
    // Created once per execution environment and reused across invocations.
    private IAmazonDynamoDB _ddbClient;

    public DynamoDBTrigger()
    {
        _ddbClient = new AmazonDynamoDBClient();
    }

    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public void Process(DynamoDBEvent ev, ILambdaContext context)
    {
        foreach (var record in ev.Records)
        {
            var item = Convert(record.Dynamodb.NewImage);
            context.Logger.Log(JsonConvert.SerializeObject(item));
        }
    }

    // A new DynamoDBContext is created and disposed for every record.
    private Item Convert(Dictionary<string, AttributeValue> attributeMap)
    {
        using (var context = new DynamoDBContext(_ddbClient))
        {
            var doc = Document.FromAttributeMap(attributeMap);
            return context.FromDocument<Item>(doc, new DynamoDBOperationConfig { OverrideTableName = Environment.GetEnvironmentVariable("Table") });
        }
    }
}

I'm writing one row per second to the triggering table, and I'm seeing a memory increase of about 1MB per 3 invocations, leading to it sitting at 128MB for a while before logging this:

START RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b Version: $LATEST
END RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b
REPORT RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b	Duration: 2293.32 ms	Billed Duration: 2300 ms Memory Size: 128 MB	Max Memory Used: 128 MB	
RequestId: 3e5acba1-36fd-4a3a-bf84-659c5488437b Process exited before completing request
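
For anyone who wants to reproduce this kind of measurement, here is a minimal sketch of logging memory at the end of each invocation; the MemoryLogger helper below is illustrative and not part of the original report:

using System;
using System.Diagnostics;
using Amazon.Lambda.Core;

public static class MemoryLogger
{
    // Call at the end of the handler to track growth across invocations of a warm instance.
    public static void LogUsage(ILambdaContext context)
    {
        // Managed heap currently in use, without forcing a collection.
        long managedMb = GC.GetTotalMemory(false) / (1024 * 1024);

        // Resident memory of the whole process, closer to the "Max Memory Used" value in the REPORT line.
        long workingSetMb = Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);

        context.Logger.Log($"Managed: {managedMb} MB, WorkingSet: {workingSetMb} MB, Limit: {context.MemoryLimitInMB} MB");
    }
}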

I have also tried a 256MB configuration and the same thing happens; the increase appears linear.

I have observed a similar thing with an API Gateway Lambda proxy which uses the ApiGatewayProxyFunction entry point. In that case the memory increases linearly until it reaches the limit, where it stays for a while before ASP.NET seems to clear up some memory:

[Screenshot: memory usage and duration graph]

Highlighting three observations:

  1. High duration on the invocations before and after garbage collection
  2. Max Memory Used dropped from 128MB to 68MB
  3. Same log stream used, so the Lambda instance is not discarded
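
For context, the entry point mentioned above follows the standard Amazon.Lambda.AspNetCoreServer pattern; a minimal sketch, with class and namespace names that are illustrative rather than taken from this project:

using Microsoft.AspNetCore.Hosting;

namespace MyApi
{
    // Lambda routes API Gateway proxy requests to this class, which hosts the
    // ASP.NET Core pipeline in-process.
    public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            // Startup here stands in for the application's normal ASP.NET Core startup class.
            builder.UseStartup<Startup>();
        }
    }
}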

I don't believe we have changed anything else or pulled in any new packages. Could the release of .NET Core 2.1.4 around the same time last week have caused this change in behaviour?

@genifycom

I have also noticed a repeatable pattern. When I publish a C# Lambda, run it, and look at its memory consumption, it has some value, say 74MB. When I simply republish it again, with no change and no recompilation, it then runs at a lower memory consumption, say 66MB.

I am also seeing projects that were previously fine now exceeding the 128MB limit, with the process terminating prematurely.

@twopointzero

128MB can barely start the process, let alone run it effectively. Having recently tested cold start performance for a relatively simple ASP.NET Core Lambda proxy implementation at every available memory limit, I can say with good confidence that the commenters before me are thrashing the GC throughout the request lifecycle and causing their own memory issues.

If they increase their memory limit to a level suitable for their application's memory needs, they will see low single-digit-second cold starts and sub-millisecond to low-millisecond response times; the right level minimizes start and execution times without increasing the cost per start/execute.
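
One way to check for the GC thrashing described here is to log the GC mode and per-generation collection counts from the handler; a minimal sketch (the helper name is illustrative), where rapidly climbing Gen2 counts alongside duration spikes suggest the memory limit is too tight:

using System;
using System.Runtime;
using Amazon.Lambda.Core;

public static class GcDiagnostics
{
    // Counts are cumulative for the lifetime of the execution environment,
    // so compare successive invocations to see how often each generation is collected.
    public static void LogCollections(ILambdaContext context)
    {
        context.Logger.Log(
            $"ServerGC: {GCSettings.IsServerGC}, " +
            $"Gen0: {GC.CollectionCount(0)}, Gen1: {GC.CollectionCount(1)}, Gen2: {GC.CollectionCount(2)}");
    }
}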

@rajatshuvro

I have a Lambda with 3GB of memory, but I also see the same issue. In the beginning my process takes about 1.5GB, but it gradually keeps increasing. Sometimes the same input consumes wildly different amounts of memory (leading to terminations). There is nothing random in my code that would cause memory consumption to vary.

@ljacobsson
Author

@twopointzero 128MB was ample for the Lambda in my example before upgrading it to .NET Core 2.1.

I agree that 128MB isn't enough for ASP.NET Core Lambda proxies, but this function doesn't make use of Microsoft.AspNetCore.App.

@hunkeelin

This has something to do with the container staying warm even after the function completes. Instead of returning normally (return "", nil in Go), just panic and it works around the problem. I know it's a hack, but that's how it is at the moment.
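
For what it's worth, a C# analogue of that Go-flavoured hack would be to kill the process deliberately so the next invocation gets a fresh execution environment. A sketch only, not a recommendation; the handler name is illustrative, and the invocation that does this is reported as failed, just as a panic would be:

using System;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

public class RecyclingHandler
{
    public void Process(DynamoDBEvent ev, ILambdaContext context)
    {
        // ... handle the event as usual ...

        // Terminates the process immediately; Lambda logs the invocation as failed
        // and starts a new execution environment for the next request.
        Environment.FailFast("Recycling execution environment to work around memory growth");
    }
}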

ashishdhingra added the bug, module/lambda-client-lib, and needs-triage labels on Aug 12, 2020
NGL321 removed the needs-triage label on Oct 20, 2020
@NGL321
Contributor

NGL321 commented Oct 20, 2020

Is anyone still experiencing this issue on current versions of lambda?

NGL321 added the response-requested label on Oct 20, 2020
@github-actions
Contributor

github-actions bot commented Nov 4, 2020

This issue has not received a response in 2 weeks. If you want to keep this issue open, please just leave a comment below and auto-close will be canceled.

github-actions bot added the closing-soon label on Nov 4, 2020
github-actions bot added the closed-for-staleness label and removed the closing-soon label on Nov 12, 2020
@ozgurakcali

ozgurakcali commented Oct 15, 2021

@NGL321 we are still experiencing the same issue, on a function we invoke every ~10 minutes. Subsequent runs of the same code on the same input use more and more memory, and when usage exceeds the function's memory limit, the task exits.

I've recreated the Lambda environment locally in Docker using the images hosted here: https://hub.docker.com/r/lambci/lambda/, and this problem does not happen there with the same input.

It looks like a memory leak at first, but it's not actually a memory leak: I print process memory consumption at certain places in the function, and it always starts at nearly the same low value, but after a number of invocations it starts to consume much more memory as the function progresses.
