
[occm] - add annotation for custom octavia listener tags #2327

Open
m-kratochvil opened this issue Aug 18, 2023 · 20 comments
Assignees
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature.

Comments

@m-kratochvil commented Aug 18, 2023

/kind feature

Hi team,
sometimes we need to enable custom configuration for a loadbalancer listener on the load balancing infrastructure backend,
for example special handling of UDP datagrams for syslog, or special settings for HW acceleration.

We implement these special settings in Octavia using listener tags. The customer simply adds a tag to the listener and the provider driver configures the necessary settings on the load balancing backend.

Could we make such custom load balancer listener tags configurable through loadbalancer Service annotations?
This would greatly improve the customer experience, as they would not have to go through the extra step of adding the tag after the listener is created.

FYI @kayrus
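To make the request concrete, a Service manifest using such an annotation might look like the sketch below. The annotation key `loadbalancer.openstack.org/listener-tags` and the tag values are hypothetical, purely to illustrate the shape of the feature:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: syslog-lb
  annotations:
    # Hypothetical annotation key and values; the real names would be
    # decided during implementation.
    loadbalancer.openstack.org/listener-tags: "stateless-udp,hw-accel"
spec:
  type: LoadBalancer
  ports:
    - name: syslog
      protocol: UDP
      port: 514
```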

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Aug 18, 2023
@dulek (Contributor) commented Aug 21, 2023

It doesn't seem this link is public; my DNS has trouble finding it.

Anyway, it feels like you're abusing tags; for functional modifications of the LB, you should use LB flavors [1]. We do already allow setting the LB flavor on a per-Service basis using an annotation [2].

[1] https://docs.openstack.org/octavia/latest/admin/flavors.html
[2] https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/loadbalancer.go#L86

@notandy (Contributor) commented Aug 31, 2023

Hi @dulek, regardless of the reason stated earlier, we'd like to have the option to set Octavia Tags via service annotations on listener level. I think this is a valid request?

@dulek (Contributor) commented Aug 31, 2023

Hi @dulek, regardless of the reason stated earlier, we'd like to have the option to set Octavia Tags via service annotations on listener level. I think this is a valid request?

Frankly to me there are two problems with that request:

  • If we allow manipulating tags, then why not names, descriptions, enablement status and so on? At some point we're just allowing all the properties of every resource to be overridden through annotations.
  • You're requesting tags for listeners, so once we allow this it balloons into allowing annotation-based tags on LBs, pools, members, healthmonitors, SGs and FIPs too, because why would listeners be special? I guess it's because the F5 driver uses listener tags to control some functionality.

I can see AWS allows custom tags on LBs based on an annotation. I think that's an approach we could consider: a single annotation that would set the tags on the Service's LB resource. And this matches what tags are for: non-functional user data that can be used to identify resources. I'd still oppose making it more granular.

@BenjaminLudwigSAP commented

for functional modifications of the LB, you should use LB flavors

Flavors are unsuitable. This is because:

  • An LB can have only one flavor. However, Octavia users might want to toggle several special behaviors simultaneously.
  • The flavor of an LB cannot be changed; the LB has to be deleted and recreated with a different flavor. It does not make sense to force users to do this every time they want to change a single special behavior of the LB.

Flavors are not intended for activating/deactivating special behavior to begin with. They represent configurations that are so fundamental that it does not make sense to switch them on/off on an existing load balancer, e.g. significantly more CPU.

why would listeners be special? I guess it's because the F5 driver uses listener tags to control some functionality.

That is the reason, correct.

If we allow manipulating tags, then why not names and descriptions, enablement status and so on
once we allow this it balloons into allowing annotation-based tags on LBs, pools, members, healthmonitors, SGs and FIPs too

No. This is not a slippery slope. There is no need to expose such OpenStack concepts to K8s users. Instead:

an approach we could consider: a single annotation that would set the tags on the Service's LB resource

That was my understanding of the idea from the beginning. I agree that it does not make sense to expose OpenStack concepts like tags etc. to K8s users.

  1. Either OCCM-created UDP LBs are supposed to always balance individual packets, in which case OCCM should always add the special UDP tag to the UDP listener, invisible to the K8s user (no annotation needed),
  2. or the K8s user needs to decide, in which case OCCM should expose the possibility to switch between stateful/stateless UDP to the user as a binary flag, i.e. an annotation.

@m-kratochvil Is there even any argument to be made for case 2?

@m-kratochvil (Author) commented

or the K8s user needs to decide, in which case OCCM should expose the possibility to switch between stateful/stateless UDP to the user as a binary flag, i.e. an annotation.

@m-kratochvil Is there even any argument to be made for case 2?

The stateless/stateful UDP is just one use case for the annotation. And IMO yes, there is an argument: it's about resource preservation. Stateless UDP requires more CPU time on the host device and should not be the default behavior for UDP listeners; it should be selected by the user based on a set of documented, recommended use cases, e.g. syslog.

@kayrus (Contributor) commented Sep 26, 2023

AWS has a tagging controller that marks AWS resources that are part of a Kubernetes (k8s) cluster. OCCM doesn't have this functionality, but some Kubernetes orchestrators already use metadata for compute instances (e.g., Gardener and Rancher use Nova metadata).

The only similar behavior I see in OCCM is with CSI drivers for Manila/Cinder, as they also utilize metadata:

$ grep -r ' = .*.csi.openstack.org/cluster"'
pkg/csi/manila/controllerserver.go: const clusterMetadataKey = "manila.csi.openstack.org/cluster"
pkg/csi/cinder/controllerserver.go: cinderCSIClusterIDKey = "cinder.csi.openstack.org/cluster"

In these cases, the value represents a cluster ID, which is set using --cluster-id=%clusterID% in Manila CSI and --cluster in Cinder CSI.

Here is a brief overview of tag/metadata support in the OpenStack API:

|         | tags             | metadata |
|---------|------------------|----------|
| nova    | X (2.26)         | X        |
| octavia | X (starting 2.5) |          |
| cinder  |                  | X        |
| manila  |                  | X        |

Unlike AWS tags, OpenStack tags are represented as lists, and the OpenStack equivalent to AWS key/value tags is metadata. Unfortunately, the AWS load balancer tag code only exists in the legacy v1 code and has not been ported to the v2 code. So it remains a question whether we can implement tags and which resources should support them.

The only widespread use case I can imagine for this feature is filtering OpenStack resources used in a Kubernetes cluster. We can define a default tag, e.g., openstack.org/cluster=%cluster_id%, and additional tags that can be assigned to loadbalancers, listeners, pools, members, and health checks.

CSI drivers could also get custom key/value metadata support, e.g. metadata with namespace=XXX, owner=USERNAME could be added to a PV.
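Since Octavia tags are plain strings in a list (unlike AWS key/value tags), an annotation-based implementation would likely just split the annotation value into a tag list. A minimal sketch of that conversion (illustrative only, not CPO code; the function name is made up):

```python
def annotation_to_tags(value: str) -> list[str]:
    """Split a comma- or newline-separated annotation value into a
    deduplicated, order-preserving list of Octavia tag strings."""
    seen: set[str] = set()
    tags: list[str] = []
    for part in value.replace(",", "\n").splitlines():
        tag = part.strip()
        if tag and tag not in seen:
            seen.add(tag)
            tags.append(tag)
    return tags

# Key/value-style entries simply stay single strings, e.g.
# "domain-alias=foobar" becomes one tag in the list.
```

Under this sketch a value like `"domain-alias=foobar\nuser=me"` yields two tags, matching how key/value pairs would be flattened into Octavia's list-of-strings model.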

@dulek dulek added the good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. label Oct 13, 2023
@yorubad-dev (Contributor) commented

/assign

@jacksgt (Contributor) commented Oct 18, 2023

Hi folks,

we're interested in a similar use case: we'd like to set custom tags on the LB itself.

Currently, OCCM already sets the unique LB name as a tag:

$ openstack loadbalancer show $ID
| tags                | kube_service_${CLUSTER_NAME}_${NAMESPACE}_${SERVICE} 

We'd like to set additional tags because our OpenStack deployment implements some custom functionality based on these (enabling IPv6 support, security rules, responsible user etc.).

| tags                | kube_service_${CLUSTER_NAME}_${NAMESPACE}_${SERVICE} 
|                     | domain-alias=foobar 
|                     | user=me 

I can imagine that this can be handled with a custom annotation on the service which gets read by OCCM, e.g.:

kind: Service
apiVersion: v1
metadata:
  name: my-lb
  annotations:
    loadbalancer.openstack.org/load-balancer-tags: |
      domain-alias=foobar
      user=me

@dulek (Contributor) commented Nov 8, 2023

Whoa, I missed answering this one, sorry about that.

for functional modifications of the LB, you should use LB flavors

Flavors are unsuitable. This is because:

  • An LB can have only one flavor. However, Octavia users might want to toggle several special behaviors simultaneously.

I think you can have flavors with different combinations of settings, but sure.

  • The flavor of an LB cannot be changed; the LB has to be deleted and recreated with a different flavor. It does not make sense to force users to do this every time they want to change a single special behavior of the LB.

CPO could easily recreate the LB if the flavor is changed, no big deal here. This reminds me that we don't do that when the provider changes, where I believe we should.

Flavors are not intended for activating/deactivating special behavior to begin with. They represent configurations that are so fundamental that it does not make sense to switch them on/off on an existing load balancer, e.g. significantly more CPU.

why would listeners be special? I guess it's because the F5 driver uses listener tags to control some functionality.

That is the reason, correct.

If we allow manipulating tags, then why not names and descriptions, enablement status and so on
once we allow this it balloons into allowing annotation-based tags on LBs, pools, members, healthmonitors, SGs and FIPs too

No. This is not a slippery slope. There is no need to expose such OpenStack concepts to K8s users. Instead:

an approach we could consider: a single annotation that would set the tags on the Service's LB resource

Cool, I think the current patch being developed does so. We already see a different use case for this in OpenShift.

That was my understanding of the idea from the beginning. I agree that it does not make sense to expose Openstack concepts like tags etc. to K8s users.

  1. Either OCCM-created UDP LBs are supposed to always balance individual packets, in which case OCCM should always add the special UDP tag to the UDP listener, invisible to the K8s user (no annotation needed),
  2. or the K8s user needs to decide, in which case OCCM should expose the possibility to switch between stateful/stateless UDP to the user as a binary flag, i.e. an annotation.

There's one argument against case 1: it's a setting specific to one Octavia provider, and an external one at that, outside the OpenStack umbrella. I don't see a provider-specific option making it into the CPO code; we should keep it generic.

@m-kratochvil Is there even any argument to be made for case 2?

Alright, I guess we're on the same page. Please assist with reviews on #2439.

@BenjaminLudwigSAP, @m-kratochvil: There's one more problem that has to be solved here: how do we handle tag updates when the annotation is updated? Do we just deliberately remove all the tags that the user could have set on the resources using the OpenStack API? Or do we keep a list of the currently applied tags in a second annotation (that the user can tamper with too)?

@m-kratochvil (Author) commented

There's one more problem that has to be solved here - how do we do tags updates when annotation is updated? Do we just deliberately remove all the tags that user could have set on the resources using OpenStack API? Or do we keep a list of tags currently applied in a second annotation (that user can tamper with too)?

It's a good question. My first thought is that it's a matter of source of truth. Since the loadbalancer is created from k8s, the source of truth for its configuration should lie there, and it should be well understood that if I change something directly via the OpenStack API, it may be overwritten by OCCM. So if that makes sense, then all tags should be replaced by the ones set in the annotation.
I'm not clear on the second option you mentioned, tbh.

@dulek (Contributor) commented Nov 10, 2023

There's one more problem that has to be solved here - how do we do tags updates when annotation is updated? Do we just deliberately remove all the tags that user could have set on the resources using OpenStack API? Or do we keep a list of tags currently applied in a second annotation (that user can tamper with too)?

It's a good question. My first thought is that it is a source of truth matter. Since the loadbalancer is created out of k8s, the source of truth for its configuration should lie there and it should be well understood that if I change something directly via Openstack, it may be overwritten by OCCM. So if what I said makes sense, then all tags should be replaced by the ones set in the annotation. I'm not clear on the second option you mentioned, tbh..

Honestly I'd be fine with just replacing everything with OCCM's point of view, but it would break how you do things now, that is, adding the tags to the listeners manually using the Octavia API. Are you okay with that?

The second idea is a bit more complicated: we keep a separate annotation with a list of the tags we applied last time. Then, when the regular annotation is modified, we can compare both lists and make sure to remove the tags that were previously added, while keeping the ones applied manually by the user.

All this is most likely super prone to race conditions anyway.
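The second idea above can be sketched as a small set reconciliation: given the tags currently on the Octavia resource, the tags recorded as applied last time, and the tags now desired, remove only what OCCM itself added before and keep anything the user set manually. A rough illustration (function and parameter names are made up, not OCCM code):

```python
def reconcile_tags(current, previously_applied, desired):
    """Return the new tag list: keep user-added tags, drop tags OCCM
    applied last time that are no longer desired, add the desired ones."""
    stale = set(previously_applied) - set(desired)  # ours, no longer wanted
    return sorted((set(current) - stale) | set(desired))

# Example: the user added "manual" via the Octavia API; OCCM previously
# applied "a" and "b"; the annotation now asks for "a" and "c".
# The result keeps "manual", drops "b", and adds "c".
```

As the comment above notes, even this remains racy if the user edits tags between reconciliation runs.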

@m-kratochvil (Author) commented

Honestly I'd be cool with just replacing everything with OCCM's point of view, but it'd break how you do things now - that is you're adding the tags to the listeners manually using the Octavia API. Are you okay with this?

I am good with that, but I'd still need an opinion from @notandy and @BenjaminLudwigSAP.
Our users can still apply listener tags via the Octavia API for loadbalancers not created via OCCM. For the ones created via OCCM, the source of truth for everything configurable via annotations should be the annotations. And as you mentioned, it's the less complicated approach, which is a benefit too.

@notandy (Contributor) commented Nov 13, 2023

I would also go with the easier, non-racy solution: if OCCM is to manage the tags, it should be the sole entity having control over them. We will figure out a safe migration path for our customers, or, as @m-kratochvil outlined, listener tags can still take care of any functionality.

@vladorf commented Feb 15, 2024

we're interested in a similar use case: we'd like to set custom tags on the LB itself.

We have use-case for custom tags too, though a bit different one.

We would like to receive metadata about the Kubernetes Service (e.g. service name, namespace, cluster) and process it in our custom provider. Currently we parse it out of the LoadBalancer description, but that's not optimal.

What do you think about tags being a general metadata-passing vehicle? Meaning OCCM would support configurable tags in its config file, something along the lines of:

# String in quotes is tag's name, block inspired by LoadBalancerClass configuration

[Tag "kubernetes-namespace"]  
# Resource to tag
# Values: LoadBalancer | Listener | Pool | ...
resource = "LoadBalancer"
# How to interpret value
# Values: jsonpath | constant
type = "jsonpath"
# Tags data
# Type: string
value = ".metadata.namespace"

Does that cover your use cases, @jacksgt @m-kratochvil? Does the generalization of tag handling sound good to you, @dulek?

If a list of tags in an annotation is a hard requirement, I suppose the configuration format could be reworked...
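The `jsonpath` lookup in the proposal can be illustrated with a tiny dotted-path resolver; this is a deliberate simplification of real JSONPath, just to show how a tag value would be derived from the Service object (all names here are illustrative):

```python
def resolve_path(obj: dict, path: str):
    """Resolve a dotted path such as '.metadata.namespace' against a
    Service object represented as nested dicts."""
    value = obj
    for key in path.strip(".").split("."):
        value = value[key]
    return value

# With a [Tag "kubernetes-namespace"] block configured as proposed, OCCM
# would evaluate ".metadata.namespace" against the Service and tag the
# LoadBalancer with kubernetes-namespace=<that value>.
service = {"metadata": {"namespace": "team-a", "name": "my-lb"}}
tag_value = resolve_path(service, ".metadata.namespace")  # "team-a"
```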

@dulek (Contributor) commented Feb 16, 2024

we're interested in a similar use case: we'd like to set custom tags on the LB itself.

We have use-case for custom tags too, though a bit different one.

We would like to receive metadata about kubernetes service (e.g. name of service, namespace, cluster) and process them in our custom provider. Currently we parse them out of LoadBalancer description, but that's not optimal.

What do you think about tags being a general metadata-passing vehicle? Meaning OCCM would support configurable tags in its config file, something along the lines of:

# String in quotes is tag's name, block inspired by LoadBalancerClass configuration

[Tag "kubernetes-namespace"]  
# Resource to tag
# Values: LoadBalancer | Listener | Pool | ...
resource = "LoadBalancer"
# How to interpret value
# Values: jsonpath | constant
type = "jsonpath"
# Tags data
# Type: string
value = ".metadata.namespace"

Does that cover your use-cases @jacksgt @m-kratochvil? Does generalization of tag handling sound good to you @dulek?

If list of tags in annotation is hard requirement, I suppose configuration format could be reworked...

Oh, this sounds interesting. We would probably need some basic default tag configuration. In general I believe putting Service information into tags makes a lot of sense. Currently it's very hard to map a loadbalancer in Octavia to a Service in K8s, because the name is hard to parse and is truncated to 255 characters.

@m-kratochvil (Author) commented

@vladorf If, in the end, it would allow the end user to set an arbitrary tag on an Octavia listener, then yes, it covers our use case.
Can you explain what exactly the end user, who basically just has a k8s cluster, would have to do to achieve that? I'm not clear on the OCCM "config file" involvement, and the possibility of making changes there. Would it mean that an OCCM admin would pre-configure all supported tags in that config file for the end users, publish a list of those supported tags, and the end users would simply set the annotation? I hope I'm not talking nonsense...

@vladorf commented Feb 29, 2024

I'm not clear on the OCCM "config file" involvement, and the possibility to make changes there.

I think you are already using a config file, even if just for OpenStack API credentials, probably in the form of a volume-mounted Secret, like here.

So making changes to the config would mean editing the relevant Secret resource.

Would it mean that an OCCM admin would basically pre-configure all supported tags in that config file for the end-users, publish a list of those supported tags, and the end-users would simply set the annotation? I hope I'm not talking non-sense...

Not nonsense at all, that's precisely the plan.

@jacksgt (Contributor) commented Feb 29, 2024

@vladorf Thanks for the proposal. It seems like it might be a bit complex to implement, but I might be wrong about that.

Given your suggestion, I could use the following config:

[Tag "dns-alias"]  
resource = "LoadBalancer"
type = "jsonpath"
value = ".metadata.annotations.dns-alias"

and then have two services:

apiVersion: v1
kind: Service
metadata:
  annotations:
    dns-alias: my-lb-1.example.com
  name: router-apps-lb-1
spec:
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    dns-alias: my-lb-2.example.com
  name: router-apps-lb-2
spec:
  type: LoadBalancer

which would result in two Octavia Loadbalancers being created with the tags "dns-alias=my-lb-1.example.com" and "dns-alias=my-lb-2.example.com", correct?

@dulek (Contributor) commented Feb 29, 2024

Just for reference, the implementation at #2439 is stalled and I don't think @KingDaemonX will pick it up, so if anyone wants to work on this, the way to start would be to address my remarks on that PR.

@vladorf commented Mar 1, 2024

@vladorf Thanks for the proposal. It seems like it might be a bit complex to implement, but I might be wrong about that.

Given your suggestion, I could use the following config:

[Tag "dns-alias"]  
resource = "LoadBalancer"
type = "jsonpath"
value = ".metadata.annotations.dns-alias"

and then have two services:

apiVersion: v1
kind: Service
metadata:
  annotations:
    dns-alias: my-lb-1.example.com
  name: router-apps-lb-1
spec:
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    dns-alias: my-lb-2.example.com
  name: router-apps-lb-2
spec:
  type: LoadBalancer

which would result in two Octavia Loadbalancers being created with the tags "dns-alias=my-lb-1.example.com" and "dns-alias=my-lb-2.example.com", correct?

Correct, that's the plan.

9 participants