Add KEDA HPA TriggerAuthentication and postgresql ScaledObject. #2384
base: develop
Conversation
We should also set a memory request for the conda-store-worker deployment (I'd suggest a minimum of 1 GiB) and allow the general node group to scale above 1 node by default for each provider (e.g. set the max_nodes default value to 5):
"general": AzureNodeGroup(instance="Standard_D8_v3", min_nodes=1, max_nodes=1), |
Otherwise, if we try to build 100 conda envs simultaneously, we'll get 100 conda-store-worker pods. They will all be scheduled on a single general node (assuming only one is running). The general node won't even try to scale up (because the conda-store-worker pods don't have any resource requests set), and it won't have enough memory to solve all the envs at the same time, so many will fail.
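A minimal sketch of what such a request could look like on the worker container (the 1Gi value and the container name are suggestions from this thread, not the deployment's current values):

```yaml
# Fragment of the conda-store-worker Deployment pod spec (names are illustrative)
containers:
  - name: conda-store-worker
    resources:
      requests:
        memory: 1Gi  # lets the scheduler pack only as many workers as a node can actually hold
```

With a request set, the scheduler marks the node as full once its allocatable memory is claimed, which is what triggers the cluster autoscaler to add a node.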
We should also make this part of the next Nebari release to tell existing deployments that it's recommended that they increase the max_nodes setting of the nebari-config if it is set explicitly to 4 or less.
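For existing deployments, that recommendation would translate into something like the following in nebari-config.yaml (the provider key and node-group layout here are a sketch; check the structure of your own config):

```yaml
# Hypothetical nebari-config.yaml fragment for the Azure provider
azure:
  node_groups:
    general:
      instance: Standard_D8_v3
      min_nodes: 1
      max_nodes: 5  # was 1; allows the general node group to scale out
```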
It also might be a good idea to set a maximum number of conda store workers as a safe guard (even if the max number is high). E.g. If 100 envs are all scheduled at once, maybe we should only start a maximum of 50 workers. (In the KEDA sql command maybe take the min of the number of queued/building envs and 50 for example). Ideally that max would be configurable in the nebari config under the conda_store section.
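One way to cap the worker count inside the scaler itself is to clamp the query result, as suggested above. The sketch below assumes a KEDA postgresql trigger and invents the table and column names, which would need to match conda-store's actual schema; the object names are placeholders too:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: conda-store-worker        # illustrative name
spec:
  scaleTargetRef:
    name: conda-store-worker      # illustrative Deployment name
  minReplicaCount: 0
  maxReplicaCount: 50             # hard ceiling; ideally read from the conda_store section of nebari-config
  triggers:
    - type: postgresql
      metadata:
        # Table and column names below are assumptions, not conda-store's real schema.
        query: "SELECT LEAST(COUNT(*), 50) FROM build WHERE status IN ('QUEUED', 'BUILDING');"
        targetQueryValue: "1"
      authenticationRef:
        name: keda-trigger-auth-postgresql
```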
Are there conda env solves that require more than 1 GB of RAM? I remember early on we had to keep bumping up the RAM on the conda-store main node.
Yes, there are certainly conda solves (most?) that use more than 1 GB of RAM. I don't think there's a way to estimate beforehand how much RAM a conda solve will need. Setting the memory resource request doesn't stop the pod from using more RAM if it's available on the node, but it would limit the number of conda-store workers scheduled on the same node.
Default as per the documentation - link is
I have tried to create 100 conda environments for testing, and not all of them transition to a completed state. Some 24 get stuck in a queued state with the following error.
@dcmcand In this case, should we roll this out based on the manual testing we have done so far? @Adam-D-Lewis Are you happy with the other changes?
… in too many requests errors in workers.
@pt247 looks like we still need this part. This could be one of the reasons the conda store workers are failing.
Remove cypress
I had to comment out the Cypress tests to get the tests passing.
Reference Issues or PRs
Fixes #2284
What does this implement/fix?
Adds KEDA HPA TriggerAuthentication and PostgreSQL ScaledObject.
In the current `develop`, one pod for `conda-store-worker` is always running. With this PR, the `conda-store-worker` pods will scale down to zero at the start, scale up with the number of pending/queued builds, and scale back down to zero when builds are complete.

Put an `x` in the boxes that apply

Testing
Any other comments?
`KEDA` and `KEDA.ScaledObject` parameters can be adjusted if needed.
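For reference, a TriggerAuthentication wiring the postgresql trigger to a Kubernetes secret might look like the sketch below; the secret name and key are placeholders, and the exact auth parameter the postgresql scaler consumes should be checked against the KEDA documentation:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-postgresql
spec:
  secretTargetRef:
    - parameter: password                  # auth parameter read by the postgresql scaler
      name: conda-store-postgresql-secret  # placeholder secret name
      key: postgresql-password             # placeholder key
```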