
High memory usage #149

Open
robb-j opened this issue Jan 14, 2023 · 8 comments
Labels
bug Something isn't working

Comments

@robb-j

robb-j commented Jan 14, 2023

Your environment

Operator Version: 1.5.7

Connect Server Version: 1.5.7?

Kubernetes Version: v1.23.14

What happened?

The memory usage of my onepassword-connect pod was 1097Mi, which seems very high. Is there a normal amount of memory this container should stay around?

What did you expect to happen?

I thought the container would need a lot less memory.

Steps to reproduce

  1. The container had been running for 38 days; after restarting it, memory usage dropped to around 30Mi.
    This is on a really small cluster with only 8 OnePasswordItem resources in it.
  2. The memory seems to go up a few Mi every few minutes (a rough way to watch this is sketched below).
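
A rough way to watch this, assuming metrics-server is installed in the cluster (the namespace below is a placeholder):

# Per-container memory for the Connect pod (needs metrics-server):
kubectl top pod --containers -n <namespace> | grep connect

# Re-run every few minutes to confirm the slow climb:
watch -n 300 "kubectl top pod --containers -n <namespace> | grep connect"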

Notes & Logs

before restarting: [screenshot of pod memory usage]

after restarting: [screenshot of pod memory usage]

my values.yml passed to helm:

operator:
  create: true
  watchNamespace: [r0b-system,games,default,tools,hyem-tech]
@robb-j added the bug label Jan 14, 2023
@mhixon4479

Seeing something similar here. The pod itself isn't being actively used, although memory seems to leak until it's OOM-killed.

[screenshot, 2023-01-25: pod memory usage]

@katherine-black

It's a shame there are no default values for the resource requests/limits included in the Helm chart.
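
In the meantime you can set your own from a values file. A minimal sketch, assuming the chart exposes per-container resources blocks under connect.api, connect.sync, and operator (the key paths and numbers below are guesses, so check them against the chart's values.yaml for your version):

connect:
  api:
    resources:
      requests:
        memory: 64Mi
      limits:
        memory: 256Mi   # with a limit set, only this container is OOM-killed and restarted
  sync:
    resources:
      requests:
        memory: 64Mi
      limits:
        memory: 256Mi
operator:
  resources:
    requests:
      memory: 32Mi
    limits:
      memory: 128Mi

Setting requests also moves the pod out of the BestEffort QoS class, so it isn't the first thing evicted when a node runs low on memory.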

@gladiatr72

gladiatr72 commented Jan 4, 2024

hrm. Has this been resolved with newer versions? I set up a Goldilocks monitor for it yesterday and am getting this as a resource recommendation:

resources:
  requests:
    cpu: 163m
    memory: 2282M
  limits:
    cpu: 163m
    memory: 2282M

The last couple of months (I recently inherited this cluster):

[graph: 1password-controller memory usage]

The variations in the graph are indicative of this sort of status message w/in the dead pod's skeleton:

Message:      The node was low on resource: memory. Container connect-api was using 1576284Ki, which exceeds its request of 0. Container connect-sync was using 831820Ki, which exceeds its request of 0.

[...]

  connect-sync:
    Container ID:
    Image:          1password/connect-sync:1.5.6
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       ContainerStatusUnknown
      Message:      The container could not be located when the pod was terminated
      Exit Code:    137

[...]

@JaniszM

JaniszM commented Feb 8, 2024

👍 , same here. My pod reaches 1GB in two days.

And finally:

The node was low on resource: memory. Threshold quantity: 100Mi, available: 85788Ki. Container connect-api was using 1671000Ki, request is 0, has larger consumption of memory. Container connect-sync was using 926732Ki, request is 0, has larger consumption of memory. 
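
For anyone else chasing this, a couple of standard kubectl checks that surface those evictions (the namespace is a placeholder):

# Eviction events recorded in the namespace:
kubectl get events -n <namespace> --field-selector reason=Evicted

# Evicted pods linger in the Failed phase until they're cleaned up:
kubectl get pods -n <namespace> --field-selector status.phase=Failed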

@gladiatr72

Bueller...

Bueller...

(sigh)

Dear 1password:

Hello. You might remember me from January 4th of this year. (I think I tripped over @robb-j's corpse in the parking lot, but, no matter, I'm here now). This isn't an open-source project in so much as anyone using it is a paying customer or works for a paying customer. If this operator should be considered abandoned, please have the courtesy of letting us know so we can make other arrangements.

Thanks

@mhixon4479

Second the motion.

@samirahafezi

Hey @gladiatr72!

Samira here from 1Password 👋🏻 Thanks for bringing this to our attention. We do keep a close eye on all of our repos, and this issue in particular hasn't been forgotten. Memory issues tend to be tricky, and we've been working on our end to figure this one out. Apologies for not keeping the issue updated.

To alleviate any concerns about this repo being abandoned, I want to stress that this is not the case. We are actively working on it; in fact, we released a new version last month that addressed a few bugs. Here are the release notes for that release.

We'll continue making updates to this repo over time and will keep the issue updated with our progress on this memory issue.

Thanks!

@gladiatr72

I'm starting to feel a bit neglected again...

@naziba321 self-assigned this May 3, 2024
@naziba321 removed their assignment May 16, 2024