[DM-40477] Shared tap chart #2443
Well, I've tried a bunch of ways to override it, but this seems to be a known and persistent problem in Helm: the value merging is always kind of busted for these cases. Here's a bug report I found saying it's still going on ("Still exists in 3.11.1"). Lucky me. For the record, I've tried `resources: {}` and `resources: null`.
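For context, here is a minimal sketch of the kind of override being attempted (the subchart name `tap` is illustrative, not taken from this repo):

```yaml
# Parent chart values.yaml -- attempting to clear the subchart's
# default resources (subchart key name is illustrative)
tap:
  resources: null   # also tried: resources: {}
# Neither form removes the defaults: Helm's value coalescing keeps
# the subchart's own values.yaml entries instead of nulling them out.
```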
Also, for anyone interested in the Helm stuff: yes, it's a subchart with resources, and I'm running: `version.BuildInfo{Version:"v3.12.2", GitCommit:"1e210a2c8cc5117d1055bfaa5d40f51bbc2e345e", GitTreeState:"clean", GoVersion:"go1.20.5"}`
Yeah, unfortunately I think you may have to explicitly override the requests and set them to something lower. I wonder if 0 does what you want. This was probably why we had the resources in the per-environment values files before, although I agree with you that it involves less repetition to have them in one place.
0 is what I'm trying now.
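Since null doesn't merge away the defaults, the workaround is to shadow them with explicit zero values, something like this (key names illustrative):

```yaml
# Parent chart values.yaml -- instead of null, explicitly override
# the subchart's requests/limits with zeroes so they take effect
tap:
  resources:
    requests:
      cpu: "0"
      memory: "0"
    limits:
      cpu: "0"
      memory: "0"
```

A zero request means the scheduler treats the pod as needing no reserved resources, which is effectively the same as having no requests at all.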
Let's just start by taking the tap chart and making a copy here. Check it in so that I can track the changes using git, diff, and so on.
Let's try and combine the qserv mock and the pg mock.
This is because we can't really use `appVersion`: the `appVersion` will differ between the postgres image and the qserv image, and the two will move independently.
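In other words, since `Chart.yaml`'s single `appVersion` field can only track one image, each image tag has to be pinned in the values file instead. A sketch of that layout (repository names and tags are illustrative, not taken from this repo):

```yaml
# values.yaml sketch: per-image tags instead of a shared appVersion
tap:
  image:
    repository: ghcr.io/lsst-sqre/lsst-tap-service
    tag: "2.0.0"       # moves independently of the mock database image
mock:
  image:
    repository: postgres
    tag: "15"
```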
There are a lot of shared values between livetap and ssotap, so let's try to make them similar.
With the changes in release 2.0.0 of the lsst-tap-service pod that supports qserv, we can share more of the uws and tap schema configuration. This all diverged upstream so let's bring it back together.
Maybe the minikube timeout is happening because the resource requests are too high, so the pods can't be scheduled and never come up. Also remove a couple of values-minikube.yaml files where they aren't used; having them there makes it look like they are.
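If high requests are the cause, a per-environment override with small requests should let minikube schedule the pods. A sketch of what that values-minikube.yaml might contain (the numbers are illustrative):

```yaml
# values-minikube.yaml sketch: shrink requests so pods fit on a
# small single-node minikube cluster (values are illustrative)
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```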
Looks like helm/helm#12162 might fix this when it gets rolled out.