In the microservice I'm building, I've experienced a memory leak using the Kubernetes client-go package, which I traced back to https://github.com/moby/spdystream/blob/master/connection.go#L733.
The memory leak is really well documented in kubernetes/kubernetes#105830.
Essentially what happens is that running the stream executors with an in-cluster kubeconfig context produces many broken pipes, and each broken pipe triggers a 10-minute wait before the shutdown goroutine ends. That wait keeps the connection's data alive in memory. With a small number of broken pipes this is not a big deal, but in my case the count rises very quickly and can easily reach 2-3 GB of allocated memory within the 10-minute hold, so I consider it a memory leak of high significance (essentially a system-service pod is taking memory that the users of the cluster can't use).
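For readers unfamiliar with the code path, here is a minimal Go sketch of the pattern (not the actual spdystream implementation; the `conn` type, `buffers` field, and channel wiring are illustrative): each broken pipe spawns a shutdown goroutine that blocks on a timeout, and everything the goroutine's closure captures stays reachable until the timer fires.

```go
package main

import (
	"fmt"
	"time"
)

// conn stands in for a spdystream-style connection; the buffered
// stream data is what stays reachable while shutdown waits.
type conn struct {
	buffers []byte        // illustrative: per-stream read/write buffers
	done    chan struct{} // closed when all streams finish cleanly
}

// shutdown mirrors the problematic shape: wait up to closeTimeout
// for streams to drain before letting go of the connection. The
// goroutine's closure keeps c (and c.buffers) alive for the full
// wait, so the GC cannot reclaim anything until the timer fires.
func (c *conn) shutdown(closeTimeout time.Duration) {
	go func() {
		select {
		case <-c.done: // streams drained cleanly
		case <-time.After(closeTimeout): // e.g. 10 minutes upstream
		}
		// only now does c become unreachable and collectible
	}()
}

func main() {
	// A broken pipe never signals done, so each connection is
	// pinned for the whole timeout. Many broken pipes in quick
	// succession add up to gigabytes of held buffers.
	for i := 0; i < 3; i++ {
		c := &conn{buffers: make([]byte, 1<<20), done: make(chan struct{})}
		c.shutdown(10 * time.Minute)
	}
	fmt.Println("3 MiB of buffers now held until the timers fire")
	time.Sleep(100 * time.Millisecond)
}
```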
My fix was to change the waiting time from 10 minutes to 10 milliseconds, and the memory leak is "gone" (not truly gone, just very low and GCed quickly). However, I wonder what the repercussions are of changing https://github.com/moby/spdystream/blob/master/connection.go#L733 to milliseconds. Also, what is the point of waiting here in the first place?
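For anyone who wants to verify this on their own workload before and after such a change, a heap profile makes the held allocations easy to see. A minimal setup using the standard net/http/pprof handler (the port is arbitrary):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func main() {
	// Expose the profiling endpoints; then, while the leak is live, run:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	// and look for allocations rooted in spdystream's shutdown path.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```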