
Old kubectl crashes when v1alpha1 api is replaced with v1beta1. #35791

Closed
mwielgus opened this issue Oct 28, 2016 · 22 comments · Fixed by #35840
Labels
area/kubectl priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@mwielgus
Contributor

mwielgus commented Oct 28, 2016

kubectl get nodes -v 8

[...]
I1028 16:23:17.613562   26999 round_trippers.go:296] GET https://104.197.107.182/apis/policy/v1beta1
I1028 16:23:17.613645   26999 round_trippers.go:303] Request Headers:
I1028 16:23:17.613693   26999 round_trippers.go:306]     Accept: application/json, */*
I1028 16:23:17.613811   26999 round_trippers.go:306]     User-Agent: kubectl/v1.4.1 (linux/amd64) kubernetes/33cf7b9
I1028 16:23:17.613867   26999 round_trippers.go:306]     Authorization: Bearer bXlnXhEFV9oHeklOcVLX2Ri4IqFWArGx
I1028 16:23:17.727550   26999 round_trippers.go:321] Response Status: 200 OK in 113 milliseconds
I1028 16:23:17.727614   26999 round_trippers.go:324] Response Headers:
I1028 16:23:17.727646   26999 round_trippers.go:327]     Content-Length: 256
I1028 16:23:17.727676   26999 round_trippers.go:327]     Content-Type: application/json
I1028 16:23:17.727706   26999 round_trippers.go:327]     Date: Fri, 28 Oct 2016 14:23:17 GMT
I1028 16:23:17.727881   26999 request.go:908] Response Body: {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"policy/v1beta1","resources":[{"name":"poddisruptionbudgets","namespaced":true,"kind":"PodDisruptionBudget"},{"name":"poddisruptionbudgets/status","namespaced":true,"kind":"PodDisruptionBudget"}]}
F1028 16:23:17.728389   26999 helpers.go:119] error: group map[:0xc8203d0540 authentication.k8s.io:0xc8203d0620 authorization.k8s.io:0xc8203d0770 autoscaling:0xc8203d07e0 batch:0xc8203d09a0 certificates.k8s.io:0xc8203d0a10 policy:0xc820070310 federation:0xc8203d0070 storage.k8s.io:0xc8200703f0 componentconfig:0xc8203d0a80 extensions:0xc8200701c0 rbac.authorization.k8s.io:0xc820070380 apps:0xc8203d05b0] is already registered
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.1.985+fe0a941025aa36", GitCommit:"fe0a941025aa36ea48d37b6bec2e73fdca88185f", GitTreeState:"clean", BuildDate:"2016-10-28T11:34:34Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}

Is this expected? Does it mean that we cannot drop and replace an alpha API without waiting for the new gcloud version?
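The fatal error in the log above is a registration conflict, which can be sketched with a toy registry (hypothetical names, not the actual kubernetes code):

```go
package main

import "fmt"

// groupRegistry is a hypothetical stand-in for kubectl's table of
// registered API groups; it is not the real apimachinery code.
type groupRegistry struct {
	groups map[string][]string // group name -> registered versions
}

// register mirrors the failure above: once a group is registered
// (policy/v1alpha1, built into kubectl 1.4), a second registration
// attempt for the same group (policy/v1beta1, found via discovery) errors.
func (r *groupRegistry) register(group, version string) error {
	if _, exists := r.groups[group]; exists {
		return fmt.Errorf("group %q is already registered", group)
	}
	r.groups[group] = []string{version}
	return nil
}

func main() {
	r := &groupRegistry{groups: map[string][]string{}}
	fmt.Println(r.register("policy", "v1alpha1")) // built-in registration succeeds
	fmt.Println(r.register("policy", "v1beta1"))  // discovery-driven attempt fails
}
```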

@mwielgus mwielgus added area/kubectl sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Oct 28, 2016
@mwielgus
Contributor Author

@bgrant0607
Member

cc @kubernetes/sig-api-machinery

@mwielgus
Contributor Author

cc: @caesarxuchao

@deads2k
Contributor

deads2k commented Oct 28, 2016

My money is on the TPR discovery code, which tries to identify TPR versions and hack them into the list of registered APIs.

@davidopp davidopp added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Oct 28, 2016
@janetkuo
Member

I guess it's not on HEAD? Do you have the PR#?

@caesarxuchao
Member

@janetkuo had a PR #35731 moving StatefulSet from v1alpha1 to v1beta1 and didn't hit the same problem. It's interesting that graduating PDB triggers it.

@janetkuo
Member

janetkuo commented Oct 28, 2016

My PR #35731 hits a similar failure but with a different symptom. I created a GCE cluster locally; kubectl get nodes doesn't give me that error, but addon-manager isn't working properly.

From the addon-manager container log, I saw a similar error:

...
error: group map[federation:0xc820327810 authentication.k8s.io:0xc820327dc0 autoscaling:0xc820327f80 componentconfig:0xc8203da230 storage.k8s.io:0xc8203da460 policy:0xc8203da380 rbac.authorization.k8s.io:0xc8203da3f0 :0xc820327ce0 apps:0xc820327d50 authorization.k8s.io:0xc820327f10 batch:0xc8203da150 certificates.k8s.io:0xc8203da1c0 extensions:0xc8203da310] is already registered
error: group map[federation:0xc820323810 autoscaling:0xc820323f80 componentconfig:0xc8203d8230 rbac.authorization.k8s.io:0xc8203d83f0 extensions:0xc8203d8310 policy:0xc8203d8380 :0xc820323ce0 apps:0xc820323d50 authentication.k8s.io:0xc820323dc0 authorization.k8s.io:0xc820323f10 batch:0xc8203d8150 certificates.k8s.io:0xc8203d81c0 storage.k8s.io:0xc8203d8460] is already registered
error: group map[federation:0xc820325810 :0xc820325ce0 authentication.k8s.io:0xc820325dc0 autoscaling:0xc820325f80 batch:0xc8203d6150 policy:0xc8203d6380 apps:0xc820325d50 authorization.k8s.io:0xc820325f10 certificates.k8s.io:0xc8203d61c0 componentconfig:0xc8203d6230 extensions:0xc8203d6310 rbac.authorization.k8s.io:0xc8203d63f0 storage.k8s.io:0xc8203d6460] is already registered
error: group map[certificates.k8s.io:0xc8203d61c0 componentconfig:0xc8203d6230 extensions:0xc8203d6310 policy:0xc8203d6380 federation:0xc820325810 :0xc820325ce0 authorization.k8s.io:0xc820325f10 batch:0xc8203d6150 rbac.authorization.k8s.io:0xc8203d63f0 storage.k8s.io:0xc8203d6460 apps:0xc820325d50 authentication.k8s.io:0xc820325dc0 autoscaling:0xc820325f80] is already registered
error: group map[rbac.authorization.k8s.io:0xc8203d83f0 apps:0xc820325d50 authorization.k8s.io:0xc820325f10 autoscaling:0xc820325f80 batch:0xc8203d8150 certificates.k8s.io:0xc8203d81c0 extensions:0xc8203d8310 policy:0xc8203d8380 federation:0xc820325810 :0xc820325ce0 authentication.k8s.io:0xc820325dc0 componentconfig:0xc8203d8230 storage.k8s.io:0xc8203d8460] is already registered
...

@janetkuo
Member

Ah, I was using the new kubectl.

@caesarxuchao
Member

caesarxuchao commented Oct 28, 2016

@janetkuo was using the newly built kubectl, so we didn't see the problem at first. Addon-manager is using the 1.4.4 kubectl, so we observed the problem later.

@caesarxuchao
Member

Hmm, if the problem is that old versions of kubectl are unable to deal with new REST endpoints, then the problem is hard. I'll take a look.

@caesarxuchao
Member

caesarxuchao commented Oct 28, 2016

@deads2k is correct. See this line.

@liggitt
Member

liggitt commented Oct 28, 2016

I thought the TPR code was inert by default and required a flag to kick in

@caesarxuchao
Member

caesarxuchao commented Oct 28, 2016

@caesarxuchao
Member

caesarxuchao commented Oct 28, 2016

We need to support one minor version of skew for kubectl, so we have to work around this on the server side. We can hide the new APIVersions in the discovery API from old kubectl.

I'm going to send a PR. In the meantime, @kubernetes/sig-api-machinery, let me know if someone can come up with a better idea.
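The server-side workaround described above can be sketched as a discovery filter keyed off the kubectl User-Agent. All names and the version threshold here are illustrative, not the actual #35840 code:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// uaRe extracts the client version from a User-Agent such as
// "kubectl/v1.4.1 (linux/amd64) kubernetes/33cf7b9".
var uaRe = regexp.MustCompile(`kubectl/v(\d+)\.(\d+)`)

// minorFromUserAgent is a hypothetical helper returning the minor version.
func minorFromUserAgent(ua string) (int, bool) {
	m := uaRe.FindStringSubmatch(ua)
	if m == nil {
		return 0, false
	}
	minor, _ := strconv.Atoi(m[2])
	return minor, true
}

// filterGroups hides groups whose new versions old clients cannot handle.
func filterGroups(groups []string, ua string) []string {
	hiddenFromOldClients := map[string]bool{"policy": true, "apps": true}
	minor, ok := minorFromUserAgent(ua)
	if !ok || minor >= 5 {
		return groups // unknown or new-enough client: show everything
	}
	var out []string
	for _, g := range groups {
		if !hiddenFromOldClients[g] {
			out = append(out, g)
		}
	}
	return out
}

func main() {
	groups := []string{"batch", "policy", "apps"}
	fmt.Println(filterGroups(groups, "kubectl/v1.4.1 (linux/amd64) kubernetes/33cf7b9"))
	fmt.Println(filterGroups(groups, "kubectl/v1.5.0 (linux/amd64) kubernetes/abcdef0"))
}
```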

@janetkuo
Member

The kubectl bug is fixed in 1.5.0-alpha.2; we need to cherry-pick the change to 1.4. @caesarxuchao is working on a server-side temporary fix.

@caesarxuchao
Member

#35840 should fix the problem.

@janetkuo
Member

janetkuo commented Nov 1, 2016

kubectl bug is fixed in 1.5.0-alpha.2, need to cherry-pick the change to 1.4.

The issue was "fixed" because the "unable to register" error is now ignored (logged but not returned), see:
https://github.com/kubernetes/kubernetes/blob/v1.5.0-alpha.2/pkg/kubectl/cmd/util/factory.go#L341
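The v1.5.0-alpha.2 behavior linked above amounts to swallowing the registration error. A minimal runnable sketch, with hypothetical names rather than the actual factory.go code:

```go
package main

import (
	"fmt"
	"log"
)

// registerGroup is a hypothetical stand-in for the group-registration call
// in kubectl's factory.go; here it always fails, as in the bug report.
func registerGroup(name string) error {
	return fmt.Errorf("group %q is already registered", name)
}

func main() {
	// Pre-1.5.0-alpha.2: the error below was fatal and kubectl crashed.
	// Post-1.5.0-alpha.2 (per the link above): the error is only logged,
	// so kubectl continues with the group missing from its restmapper.
	if err := registerGroup("policy"); err != nil {
		log.Printf("unable to register: %v", err)
	}
	fmt.Println("kubectl continues instead of crashing")
}
```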

@liggitt
Member

liggitt commented Nov 1, 2016

That line in factory.go doesn't seem like a fix... it leaves the client restmapper in an unknown state, which seems bad to me.

@janetkuo
Member

janetkuo commented Nov 1, 2016

Yeah, it's not really a fix.

@caesarxuchao
Member

Looks like kubectl should try to register the version into the existing group. IMO this isn't a blocker for #35840.
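Merging a newly discovered version into an already-registered group, instead of erroring, could look roughly like this (a hypothetical registry, not the real restmapper code):

```go
package main

import "fmt"

// registry is a hypothetical group/version table for illustration.
type registry struct {
	groups map[string][]string
}

// registerVersion appends the version to an existing group rather than
// failing when the group is already present; repeat calls are no-ops.
func (r *registry) registerVersion(group, version string) {
	for _, v := range r.groups[group] {
		if v == version {
			return // already known
		}
	}
	r.groups[group] = append(r.groups[group], version)
}

func main() {
	r := &registry{groups: map[string][]string{"policy": {"v1alpha1"}}}
	r.registerVersion("policy", "v1beta1") // merges instead of crashing
	fmt.Println(r.groups["policy"])        // [v1alpha1 v1beta1]
}
```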

@liggitt
Member

liggitt commented Nov 1, 2016

@janetkuo @caesarxuchao can you make sure there's a 1.5 issue tracking the failure to register new versions into existing groups correctly? Otherwise we'll hit a related issue in the future.

@janetkuo
Member

janetkuo commented Nov 1, 2016

@janetkuo @caesarxuchao can you make sure there's a 1.5 issue tracking the failure to register the new versions into existing groups correctly, or we'll have a related issue in the future

Filed #36007 to track this

k8s-github-robot pushed a commit that referenced this issue Nov 2, 2016
Automatic merge from submit-queue

Hide groups with new versions from old kubectl

Fix #35791

**What caused the bug?**

In 1.5, we are going to graduate Policy and Apps to beta. An old-version kubectl doesn't have the new versions built in, so its TPR dynamic discovery thinks Policy/v1beta1 is a TPR and tries to register it in kubectl's scheme. The registration fails because the Policy group already exists: kubectl had registered Policy/v1alpha1.

**How does this PR fix the bug?**

This PR makes the API server hide Policy and Apps from old-version kubectl, so TPR discovery won't see them.

Old-version kubectl doesn't know about Policy/v1beta1 or Apps/v1beta1, and v1alpha1 will be removed, so old-version kubectl won't work for Policy or Apps anyway; this PR therefore does not cause any loss of function.

@kubernetes/sig-api-machinery @liggitt @smarterclayton @deads2k @janetkuo @mwielgus
7 participants