Old kubectl crashes when v1alpha1 api is replaced with v1beta1. #35791
cc @kubernetes/sig-api-machinery
cc: @caesarxuchao
My money is on the TPR discovery code that tries to identify TPR versions and hacks them into the list of registered APIs.
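The failure mode described here can be sketched with a toy model. The `scheme` type below is a simplified stand-in for kubectl's registered-API bookkeeping (the real logic lives in kubectl's scheme/RESTMapper; the names are illustrative, not actual Kubernetes API), assuming a registration call that fails when a group name is already present:

```go
package main

import "fmt"

// scheme is a toy stand-in for kubectl's registered-API bookkeeping; the
// real logic lives in kubectl's scheme/RESTMapper, and these names are
// illustrative, not the actual Kubernetes API.
type scheme struct {
	groups map[string][]string // group name -> registered versions
}

// registerGroup fails when the group is already present, mirroring the
// "unable to register" error described in this thread.
func (s *scheme) registerGroup(group string, versions ...string) error {
	if _, exists := s.groups[group]; exists {
		return fmt.Errorf("unable to register %s: group already registered", group)
	}
	s.groups[group] = versions
	return nil
}

func main() {
	s := &scheme{groups: map[string][]string{}}

	// An old kubectl ships with policy/v1alpha1 built in.
	if err := s.registerGroup("policy", "v1alpha1"); err != nil {
		panic(err)
	}

	// TPR discovery then sees the unknown policy/v1beta1 endpoint, assumes
	// it is a ThirdPartyResource, and tries to register the group again.
	if err := s.registerGroup("policy", "v1beta1"); err != nil {
		fmt.Println(err) // unable to register policy: group already registered
	}
}
```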
I guess it's not on HEAD? Do you have the PR#?
My PR #35731 has similar failure but different symptom. I created a GCE cluster locally. From addon-manager container log, I saw similar error:
Ah, I was using the new kubectl.
@janetkuo was using the newly built kubectl, so we didn't see the problem at first. The addon-manager is using the 1.4.4 kubectl, so we observed the problem later.
Hmm, if the problem is that old versions of kubectl are unable to deal with new REST endpoints, then the problem is hard. I'll take a look.
I thought the TPR code was inert by default and required a flag to kick in.
It seems the flag is enabled for most commands: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/get.go#L122
We need to support 1 minor version of skew for kubectl, so we have to work around this on the server: we can hide the new APIVersions in the discovery API from old kubectl. I'm going to send a PR. In the meantime, @kubernetes/sig-api-machinery, speak up if someone has a better idea.
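A minimal sketch of that server-side workaround, assuming the server can recognize an old kubectl from its User-Agent and filter the versions a group advertises in discovery. The function names and the User-Agent parsing here are assumptions for illustration; the actual change lives in the API server's discovery handling:

```go
package main

import (
	"fmt"
	"strings"
)

// isOldKubectl does a deliberately crude check for a pre-1.5 kubectl
// User-Agent such as "kubectl/v1.4.4 ..." (illustrative parsing only).
func isOldKubectl(userAgent string) bool {
	return strings.HasPrefix(userAgent, "kubectl/v1.3") ||
		strings.HasPrefix(userAgent, "kubectl/v1.4")
}

// visibleVersions filters the versions a group advertises in discovery,
// dropping the new beta versions of policy and apps for old clients so
// their TPR discovery never sees them.
func visibleVersions(group string, versions []string, userAgent string) []string {
	if !isOldKubectl(userAgent) {
		return versions
	}
	var out []string
	for _, v := range versions {
		if (group == "policy" || group == "apps") && v == "v1beta1" {
			continue
		}
		out = append(out, v)
	}
	return out
}

func main() {
	// Old kubectl: the new beta version is hidden.
	fmt.Println(visibleVersions("policy", []string{"v1beta1"}, "kubectl/v1.4.4"))
	// New kubectl: discovery is unchanged.
	fmt.Println(visibleVersions("policy", []string{"v1beta1"}, "kubectl/v1.5.0"))
}
```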
kubectl bug is fixed in
#35840 should fix the problem.
The issue was "fixed" because the "unable to register" error was ignored (not returned, just logged), see:
That line in factory.go doesn't seem like a fix... it seems like it's leaving the client RESTMapper in an unknown state, which seems bad to me.
Yeah, it's not really a fix.
Looks like kubectl should try to register the version into the existing group. IMO this isn't a blocker for #35840.
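The suggested behavior can be sketched as follows: instead of failing when the group already exists, merge the newly discovered version into it. The `registry` type is a simplified, hypothetical stand-in for kubectl's bookkeeping (the real follow-up is what #36007 tracks):

```go
package main

import "fmt"

// registry is a simplified, hypothetical stand-in for kubectl's
// registered-API bookkeeping, used only to illustrate the merge idea.
type registry struct {
	groups map[string][]string // group name -> registered versions
}

// registerVersion adds version to group, creating the group if needed
// and skipping versions that are already registered, instead of
// erroring out when the group already exists.
func (r *registry) registerVersion(group, version string) {
	for _, v := range r.groups[group] {
		if v == version {
			return // already known, nothing to do
		}
	}
	r.groups[group] = append(r.groups[group], version)
}

func main() {
	r := &registry{groups: map[string][]string{}}
	r.registerVersion("policy", "v1alpha1") // built-in version
	r.registerVersion("policy", "v1beta1")  // newly discovered version
	fmt.Println(r.groups["policy"])         // [v1alpha1 v1beta1]
}
```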
@janetkuo @caesarxuchao can you make sure there's a 1.5 issue tracking the failure to register new versions into existing groups correctly, or we'll have a related issue in the future.
Filed #36007 to track this.
Automatic merge from submit-queue

Hide groups with new versions from old kubectl

Fix #35791

**What caused the bug?** In 1.5, we are going to graduate Policy and Apps to beta. An old-version kubectl doesn't have the new versions built in, so its TPR dynamic discovery thinks Policy/v1beta1 is a TPR and tries to register it in kubectl's scheme. The registration fails because the Policy group already exists: kubectl had already registered Policy/v1alpha1.

**How does this PR fix the bug?** This PR lets the API server hide Policy and Apps from old-version kubectl, so TPR discovery won't see them. An old-version kubectl doesn't know about Policy/v1beta1 or Apps/v1beta1, and v1alpha1 will be removed, so old kubectl won't work with Policy or Apps anyway; this PR therefore doesn't cause any loss of functionality.

@kubernetes/sig-api-machinery @liggitt @smarterclayton @deads2k @janetkuo @mwielgus
Is this expected? Does it mean that we can't drop and replace an alpha API without waiting for a new gcloud version?