This repository has been archived by the owner on Sep 14, 2020. It is now read-only.

Question: run-time registration of handlers? #378

nemethf opened this issue Jun 24, 2020 · 2 comments
Labels
question Further information is requested

Comments


nemethf commented Jun 24, 2020

Question

I have a CRD with a selector field that defines a set of pods on which my operator should configure something. Additionally, it is possible to have two custom resources configuring different parts of the same pod. That is, the intersection of the selectors can be non-empty.

Without using kopf, I would create a watch on pods for each custom resource and write an event handler for the watchers. The handler would somehow receive the event and the name of the custom resource it belongs to. It seems kopf does not support this approach.
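
Roughly, what I have in mind looks something like this (just a sketch with the plain kubernetes client, not kopf; handle_pod_event and watch_pods_for_cr are made-up placeholder names):

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

def handle_pod_event(cr_name: str, event: dict) -> None:
    # Placeholder: the handler knows which custom resource the event belongs to.
    print(f"{cr_name}: {event['type']} {event['object'].metadata.name}")

def watch_pods_for_cr(cr_name: str, selector: dict) -> None:
    # One watch per custom resource, with its selector pushed to the API side.
    label_selector = ",".join(f"{k}={v}" for k, v in selector.items())
    for event in watch.Watch().stream(v1.list_pod_for_all_namespaces,
                                      label_selector=label_selector):
        handle_pod_event(cr_name, event)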

So can you please tell me how I can implement an operator for this problem with kopf? Thank you.

(I think #317 is somewhat similar, but not applicable here.)

Checklist

Keywords

I basically read all the titles of the open issues and pull requests, and the documentation from cover to cover.

nemethf added the question label on Jun 24, 2020

nolar commented Jun 24, 2020

I am not sure that I got the idea right, so the answer may be wrong or irrelevant. But this is how I would approach it:

Since there are pods involved, there is a need for a pod handler. Since not all pods should be involved, we have to filter them. Since the filtering criteria are quite sophisticated, I would use callbacks:

import kopf

def does_it_match(**_) -> bool:
    # For now, accept every pod; the real filtering comes later below.
    return True

@kopf.on.event('', 'v1', 'pods', when=does_it_match)
def pod_event(**_):
    pass

So, at this moment, all pods in the cluster/namespace will be intercepted. Now, we need to narrow the criteria. Since there is a selector in every CR, I would keep a global state of all selectors in memory, keyed by the original CRs they came from:

from typing import Mapping, MutableMapping, Tuple

import kopf

SelectorKey = Tuple[str, str]  # (namespace, name)
SelectorLabels = Mapping[str, str]

SELECTORS: MutableMapping[SelectorKey, SelectorLabels] = {}


@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
@kopf.on.resume('zalando.org', 'v1', 'kopfexamples')
@kopf.on.update('zalando.org', 'v1', 'kopfexamples')  # optionally
def cr_appears(namespace, name, spec, **_):
    # Remember the CR's selector, keyed by the CR's identity.
    key = (namespace, name)
    SELECTORS[key] = spec.get('selector', {})


@kopf.on.delete('zalando.org', 'v1', 'kopfexamples')
def cr_disappears(namespace, name, **_):
    # Forget the selector when the CR is gone.
    key = (namespace, name)
    try:
        del SELECTORS[key]
    except KeyError:
        pass

So, at this point, we would have the data for filtering the pods. Now, I would do the actual filtering in that function from above:

def does_it_match(labels: Mapping[str, str], **_) -> bool:
    # A pod matches if its labels satisfy at least one CR's selector.
    # (Note: an empty selector matches every pod.)
    for (namespace, name), selector_labels in SELECTORS.items():
        if all(labels.get(key) == val for key, val in selector_labels.items()):
            return True
    return False

Now, the pods that do not match any known selector will be silently ignored. Notice: they will still get into the sight of the operator itself, in one and only one watch-stream, but they will be filtered out at the earliest stage, with no logs produced (hence "silently").


This is the difference from your suggested approach: instead of having N watch-streams with label selectors in the URL (where N is the number of CRs with selectors), there will be one and only one watch-stream (and therefore one TCP/HTTP/API connection), seeing all the pods, picking those of interest, and ignoring the others.

This will ease the load on the API side, but will put some CPU load on the operator. The RAM footprint will be minimal, though not zero: every pod will spawn its own worker task (an asyncio.Task), to which the pod events will be routed and almost instantly ignored; but the tasks are objects too, and on a cluster with thousands of pods this can become noticeable.


As a continuation, using the same for + if, I would be able to detect, in the handler itself, to which CRs each individual pod corresponds (one or even a few of them), and do something with that pod as the contextual object (in kwargs) and the detected CRs. Perhaps the CRs' specs should also be preserved somewhere in the global state, so that we would know what specifically to do once the matching CRs are identified by their selectors.
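
For example, something like this (just a sketch; the print stands in for whatever the operator actually does with the pod and the matched CRs):

@kopf.on.event('', 'v1', 'pods', when=does_it_match)
def pod_event(namespace, name, labels, **_):
    # The same for + if as in the filter, but now collecting all matching CRs:
    matching_crs = [
        cr_key
        for cr_key, selector_labels in SELECTORS.items()
        if all(labels.get(key) == val for key, val in selector_labels.items())
    ]
    for cr_key in matching_crs:
        print(f"Pod {namespace}/{name} is covered by CR {cr_key}")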


The downside here is that you have to keep some state in memory — for all the CRs, or all the pods, or all of something, depending on which of them you expect to be the least memory consuming.

I am not yet sure if it is possible to solve the cross-resource communication in any other way: when an event happens on a pod, no events happen on the CRs, so we have nothing to "join". You either scan your own in-memory state, or query Kubernetes's state via the K8s API on every pod event (costly!). Either way, the up-to-date state must be somewhere.
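
For comparison, the API-scanning variant would look roughly like this (a sketch with the plain kubernetes client, which needs its own configuration; note the extra API call on every single pod event):

from kubernetes import client

def selectors_from_api() -> MutableMapping[SelectorKey, SelectorLabels]:
    # Same shape as SELECTORS above, but re-fetched from the API on every pod event.
    api = client.CustomObjectsApi()
    crs = api.list_cluster_custom_object('zalando.org', 'v1', 'kopfexamples')
    return {
        (item['metadata'].get('namespace'), item['metadata']['name']):
            item.get('spec', {}).get('selector', {})
        for item in crs.get('items', [])
    }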


PS: The typing annotations are fully optional and are ignored at runtime. I just got into the habit of using them for clarity.


nemethf commented Jun 25, 2020

I still think the dynamic handler registration is a bit more convenient, but you are right that it is less scalable than the approach you outlined.

However, with your approach, it might happen that a pod is created (and generates no events afterwards) before the corresponding CR appears in the system, or vice versa, so I think the operator should store all the relevant info for both CRs and pods. This is doable, but it gets complicated when the CRD contains a namespace field in addition to the selector field.
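
For the pods that already exist when the CR appears, I will probably just list them once in the CR handler, roughly like this (a sketch with the plain kubernetes client, assuming the pods live in the CR's own namespace; the print stands in for the real configuration work):

from kubernetes import client
import kopf

@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
def cr_appears(namespace, name, spec, **_):
    selector = spec.get('selector', {})
    SELECTORS[(namespace, name)] = selector

    # Catch up with the pods that already exist and will produce no new events:
    label_selector = ",".join(f"{k}={v}" for k, v in selector.items())
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace, label_selector=label_selector).items:
        print(f"Existing pod {pod.metadata.name} matches CR {namespace}/{name}")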

At any rate, I'm going to ignore the namespace field in my operator and go with your idea. Thank you for enlightening me.
