Handle GroupDiscoveryFailedError in proxy.go (#5596) #6222
Conversation
Handle GroupDiscoveryFailedError in proxy.go (operator-framework#5596): Similar to the fix in Helm (helm/helm#6361), this fix allows a GroupDiscoveryFailedError to be tolerated rather than aborting the process of managing apiserver resource types. Fixes operator-framework#5596 Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Add a changelog fragment for the merge automation operator-framework#5596 Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Hi maintainers! I think this fix should address the linked issue, but I'm still working on getting a valid repro case that will allow me to fully validate. I'll also plan to look into a unit test to exercise the corner case before moving to full review.
I've verified that …
@gabe-l-hart Thanks for taking the time to make this contribution! I looked through the changes and it seems relatively straightforward. I would definitely like to find a way to add a test case for this PR before merging, though, to ensure the change is working as expected. Regarding the implementation of the test case itself, this comment on the Helm issue you linked makes me think that as part of the test case we would need to install an api-service onto the test cluster and then intentionally orphan it somehow (I'm honestly not sure how to do this; I'm not very familiar with aggregated apiservers).
Thanks for the pointer @everettraven! I'll look into adding a test and see how far I get. I also don't know yet how to simulate an orphaned APIService.
@gabe-l-hart I was doing some pretty rudimentary digging around, and I think a good starting point for understanding how the apiservers work could be: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/

I also saw https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/api-service-v1/ - it looks to go over the APIService resource, and I wonder if we could somehow use that to fake out the cluster we are using for testing into thinking that there is an apiserver running for a specific group.

Just some more thoughts from doing some brief digging around about aggregated apiservers - I'm looking forward to seeing what you come up with (and feel free to ping me on here if you just want to bounce ideas around :))
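One possible way to set up the "orphaned apiserver" condition discussed above (an assumption sketched here, not something validated in this thread) is to register an APIService whose backing Service does not exist, so discovery for that one group fails while the rest of the cluster keeps working. All names below (`example.test`, `no-such-service`) are hypothetical:

```yaml
# Hypothetical APIService with a deliberately missing backend Service.
# Discovery for example.test/v1beta1 will fail, which is the condition
# that surfaces GroupDiscoveryFailedError in clients.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.example.test
spec:
  group: example.test
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true
  service:
    name: no-such-service   # intentionally nonexistent
    namespace: default
```

Applying this to a test cluster and then running discovery (e.g. `kubectl api-resources`) should report the failed group without affecting other groups.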
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Description of the change:
Similar to the fix in Helm (helm/helm#6361), this fix allows a GroupDiscoveryFailedError to be tolerated rather than aborting the process of managing apiserver resource types.
Motivation for the change:
The proposed fix in #5596 requires that the cluster's state be fixed (upgrading / cleaning up the metrics-server apiserver resource). While this is correct, it's not a fix that can be made by the author of an operator, as it requires the user of the operator to fix their cluster. Fixing this on the client side (in the operator itself) will save headaches and support tickets in the future.

Checklist
If the pull request includes user-facing changes, extra documentation is required:

- changelog/fragments (see changelog/fragments/00-template.yaml)
- website/content/en/docs