
Support for soft RAM limit? #56

Open

pete-woods opened this issue Aug 3, 2022 · 6 comments

Comments

@pete-woods

Hello 👋

Are there any plans to support the soft RAM limit introduced in Go 1.19?

(Either in this library or a new one)

@rabbbit

rabbbit commented Aug 6, 2022

Hey,

Just acknowledging here that we saw the question. We're discussing this, but we don't have an exact plan yet.

@StevenACoffman

This appears to be a quick attempt at doing that: https://github.com/KimMachineGun/automemlimit
It uses containerd under the hood, but I've not tried it under various container runtimes or on non-Linux OSes like Darwin.
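If it helps, my understanding from its README (treat this as a sketch, not a verified claim) is that it is wired in with a blank import, much like automaxprocs:

```go
package main

import (
	// Blank import for side effects: at init time it reads the container's
	// cgroup memory limit and sets GOMEMLIMIT (by default to a fraction of
	// that limit) on Linux; elsewhere it should effectively be a no-op.
	_ "github.com/KimMachineGun/automemlimit"
)

func main() {
	// ... application code ...
}
```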

@StevenACoffman

StevenACoffman commented Oct 3, 2022

GOMAXPROCS and GOMEMLIMIT are environment variables that can help the application's Go runtime perform better.

In Kubernetes manifests, we could easily (and naively) set GOMEMLIMIT to be equal to the container limit like this:

apiVersion: v1
kind: Pod
metadata:
  name: go-k8s-memory
spec:
  containers:
  - name: go-k8s-memory
    image: ko://go-k8s-memory
    ports:
    - containerPort: 8080
    resources:
      limits:
        memory: 64Mi        # <--- limit on container memory
        cpu: "1"
      requests:
        memory: 62Mi
        cpu: "0.2"
    env:
    - name: GOMEMLIMIT
      valueFrom:            # <--- value taken from the downward API
        resourceFieldRef:
          resource: limits.memory

However, the Go documentation has this note:

In this case, a good rule of thumb is to leave an additional 5-10% of headroom to account for memory sources the Go runtime is unaware of.

Examples of these sources might include:

  • OS kernel memory held on behalf of the process
  • memory allocated by C code
  • memory mapped by syscall.Mmap

As a result, it would be best if we could correctly introspect the container's memory limit and set GOMEMLIMIT to 90-95% of it.

Also, I believe the Kubernetes API expects values to omit the B, as in "62Mi", while GOMEMLIMIT expects "62MiB", and GOMEMLIMIT does not allow decimal values (GOMEMLIMIT="6.2MiB" will cause an error), so it's awkward to handle without a library to smooth this over.
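In the meantime, here's a rough sketch of doing it by hand against cgroups v2; the setSoftMemLimit helper name, the hard-coded path, and the 90% factor are illustrative assumptions rather than a vetted implementation:

```go
package main

import (
	"os"
	"runtime/debug"
	"strconv"
	"strings"
)

// setSoftMemLimit is a hypothetical helper: it reads the cgroup v2 memory
// limit and sets the Go soft memory limit to ~90% of it, leaving the
// rule-of-thumb headroom for memory the Go runtime is unaware of.
func setSoftMemLimit() {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.max") // cgroup v2 unified hierarchy
	if err != nil {
		return // not running under cgroup v2; keep the default (effectively unlimited)
	}
	s := strings.TrimSpace(string(data))
	if s == "max" {
		return // no memory limit configured for this container
	}
	limitBytes, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return
	}
	// SetMemoryLimit takes a plain byte count, so no "62Mi" vs "62MiB" parsing is needed.
	debug.SetMemoryLimit(limitBytes * 9 / 10) // ~90% of the container limit
}

func main() {
	setSoftMemLimit()
	// ... application code ...
}
```

Calling runtime/debug.SetMemoryLimit directly also sidesteps the env-var format mismatch entirely.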

@StevenACoffman

@rabbbit any update on your internal discussions?

@sywhang
Contributor

sywhang commented Jan 12, 2023

Hey folks, we've internally developed a system that automatically detects and sets GOMEMLIMIT, and we decided not to put that code in automaxprocs. The main rationale behind that decision is that this library, as its name suggests, detects max "procs", not the memory limit.

We could open-source the internal library we're using, but we have a requirement to run code internally for some period of time before we can open-source it, because we don't want to release something that hasn't been battle-tested : )

Hope that makes sense. We may reconsider open-sourcing the internal library in the future, but this is what we've decided for now. I won't close this issue, as it's something we may still consider.

Thanks.

@KyleSanderson

> Hey folks, we've internally developed a system that automatically detects and sets GOMEMLIMIT, and we decided not to put that code in automaxprocs. The main rationale behind that decision is that this library, as its name suggests, detects max "procs", not the memory limit.
>
> We could open-source the internal library we're using, but we have a requirement to run code internally for some period of time before we can open-source it, because we don't want to release something that hasn't been battle-tested : )
>
> Hope that makes sense. We may reconsider open-sourcing the internal library in the future, but this is what we've decided for now. I won't close this issue, as it's something we may still consider.
>
> Thanks.

Happy Holidays (and anniversary) 😉 How did it go?
