
Knative Installation Wipes Namespace Labels (Should Merge) #388

Open
awgneo opened this issue Dec 10, 2020 · 18 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. triage/accepted Issues which should be fixed (post-triage)

Comments


awgneo commented Dec 10, 2020

Describe the bug
When installing Knative Serving into a fresh namespace that already has labels defined, the existing labels are wiped out and replaced by the Knative release label, when they should instead be merged.

apiVersion: v1
kind: Namespace
metadata:
 name: knative-serving
 labels:
  istio-injection: enabled

becomes

apiVersion: v1
kind: Namespace
metadata:
 name: knative-serving
 labels:
  serving.knative.dev/release: v0.18.0

Expected behavior
The labels of the namespace should be merged.

apiVersion: v1
kind: Namespace
metadata:
 name: knative-serving
 labels:
  istio-injection: enabled
  serving.knative.dev/release: v0.18.0

To Reproduce
Create a namespace with labels and install a Knative operator instance.

Knative release version
v0.18.0 - v0.19.1

Additional context
None

@awgneo awgneo added the kind/bug Categorizes issue or PR as related to a bug. label Dec 10, 2020
@houshengbo
Contributor

@jcrossley3 Could you take a look?

@jcrossley3
Contributor

jcrossley3 commented Dec 11, 2020

@awgneo how was the original Namespace created? In particular, did it have a last-applied-configuration annotation? If your label was in that annotation, it will be deleted per https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#merging-changes-to-map-fields
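
A quick way to check is to inspect that annotation directly (namespace name as in the report; the backslash escaping of dots in the key follows the usual kubectl jsonpath convention):

kubectl get namespace knative-serving \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'

If istio-injection appears in that JSON, the three-way merge described in the linked doc will prune the label whenever a later apply omits it.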

@alxndr42

The namespace was created with kubectl apply. Is using kubectl create a safe workaround for this problem?

I also wonder if the Operator could add its label in a less destructive way, even if kubectl apply was used.

@houshengbo
Contributor

The Knative Operator supports customized manifests, which means you can shape your Knative YAMLs into anything that fits your platform.
In 0.18 and 0.19 we only support a complete replacement of all YAMLs via the field spec.manifests, but in the coming release we will support appending customized YAMLs to the existing manifests via the field spec.additionalManifests.
You can leverage this upcoming feature to customize your resources, as sketched below.
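
A sketch of what that could look like once spec.additionalManifests lands (the apiVersion and field shape are assumed from the description above, and the URL is a placeholder):

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  additionalManifests:
    - URL: https://example.com/my-extra-manifests.yaml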

@houshengbo
Contributor

I used the manifestival lib to try creating the namespace knative-serving first with the label istio-injection: enabled and the annotation knative.dev/example-checksum: 1d830d9e, then updating the namespace without the label istio-injection: enabled.

Here is the output when merging:

The unstructured.Unstructured to be applied is

&{map[apiVersion:v1 kind:Namespace metadata:map[labels:map[serving.knative.dev/release:v0.19.0] name:knative-serving]]}

The current unstructured.Unstructured in the cluster is

&{map[apiVersion:v1 kind:Namespace metadata:map[annotations:map[knative.dev/example-checksum:1d830d9e kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"knative.dev/example-checksum":"1d830d9e"},"labels":{"istio-injection":"enabled","serving.knative.dev/release":"v0.19.0"},"name":"knative-serving"}}

The original unstructured.Unstructured in last-applied-configuration is:

{"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"knative.dev/example-checksum":"1d830d9e"},"labels":{"istio-injection":"enabled","serving.knative.dev/release":"v0.19.0"},"name":"knative-serving"}

After the update, the namespace is

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"labels":{"serving.knative.dev/release":"v0.19.0"},"name":"knative-serving"}}
  creationTimestamp: "2020-12-16T21:16:13Z"
  labels:
    serving.knative.dev/release: v0.19.0

The existing labels and annotations are gone.
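
For what it's worth, the same pruning can be reproduced with kubectl alone, which suggests manifestival is simply following standard kubectl apply three-way-merge semantics. A minimal sketch (the namespace name and release label here are illustrative):

# First apply records istio-injection in last-applied-configuration.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: merge-demo
  labels:
    istio-injection: enabled
EOF

# Second apply omits the label; because it was present in
# last-applied-configuration, the merge deletes it from the live object.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: merge-demo
  labels:
    serving.knative.dev/release: v0.19.0
EOF

kubectl get namespace merge-demo --show-labels   # istio-injection is gone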

@alxndr42

We're using Knative Operator 0.18, so I tested creating the namespace with kubectl create and the istio-injection label, then installing Knative Serving. This did not cause the istio-injection label to disappear, so it seems like a viable workaround.

(I also took a quick look at spec.manifests, but it doesn't seem to support file: URLs (unsupported protocol scheme \"file\"), which would be convenient. Unless I'm misunderstanding what this attribute does.)
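
For reference, a sketch of the workaround described above (neither kubectl create nor kubectl label records anything in a last-applied-configuration annotation, so the operator's apply has nothing to prune the label against):

kubectl create namespace knative-serving
kubectl label namespace knative-serving istio-injection=enabled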

@github-actions

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 18, 2021
@maxbog

maxbog commented Jun 3, 2022

/reopen

I'm having trouble understanding how this can be the intended behavior. This is the basic scenario when installing Knative using kubectl apply.

IMO, the workaround with kubectl create is not viable, because it changes the approach to defining the namespace from declarative to imperative.

Given the scenario from the bug (a namespace definition with a label): how do I make sure that the operator does not remove the label set by kubectl apply?

@knative-prow

knative-prow bot commented Jun 3, 2022

@maxbog: You can't reopen an issue/PR unless you authored it or you are a collaborator.


@eloyekunle

/reopen

This issue is still not resolved.

@knative-prow

knative-prow bot commented Aug 17, 2022

@eloyekunle: You can't reopen an issue/PR unless you authored it or you are a collaborator.


@sugarman402

/reopen

@knative-prow

knative-prow bot commented Mar 29, 2023

@sugarman402: You can't reopen an issue/PR unless you authored it or you are a collaborator.


@sugarman402

/remove-lifecycle stale

@knative-prow knative-prow bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 29, 2023
@sugarman402

The issue is still there. It would be nice to support an annotation that we could add to the namespace so that the operator webhook won't touch the existing labels on the namespace.

@pierDipi
Member

/reopen
/remove-lifecycle stale
/triage accepted

@knative-prow knative-prow bot added the triage/accepted Issues which should be fixed (post-triage) label Mar 30, 2023
@knative-prow

knative-prow bot commented Mar 30, 2023

@pierDipi: Reopened this issue.


@knative-prow knative-prow bot reopened this Mar 30, 2023
@dprotaso
Member

@houshengbo if the operator is installing things into a namespace, should it remove that namespace from the manifests, and thus not overwrite it?
