
Unprocessable Entity #309

Open
brutus333 opened this issue Feb 10, 2020 · 13 comments
Labels: bug (Something isn't working)

@brutus333

Long story short

I've tried to qualify 0.25 against existing tests built with the KopfRunner context. These tests worked well with all versions from 0.21 to 0.24; with 0.25, however, the framework raises an error.

Description

One of the simplest tests creates a custom object, lets the operator create a pod based on the custom object definition, and then deletes the custom object (which, via owner-based cascaded deletion, deletes the pod too).

import subprocess
import time
import unittest

from kopf.testing import KopfRunner

KOPF_RUNNER_COMMAND = ['run', 'src/libvirt.py', '--namespace', 'default', '--standalone']


class MyTestCase(unittest.TestCase):

    def test_custom_object_creation_and_deletion(self):
        with KopfRunner(KOPF_RUNNER_COMMAND, timeout=30) as runner:
            # Do something while the operator is running.
            subprocess.run("kubectl apply -f tests/libvirtds1.yaml", shell=True, check=True)
            time.sleep(5)  # give it some time to react, to sleep, and to retry

            subprocess.run("kubectl delete -f tests/libvirtds1.yaml", shell=True, check=True)
            time.sleep(30)  # give it some time to react

        self.assertEqual(runner.exit_code, 0)
        self.assertIs(runner.exception, None)
        self.assertIn('falling back to kubeconfig configuration', runner.stdout)
        self.assertIn('Starting to create pod on node', runner.stdout)
        self.assertIn('Running delete handler for pod', runner.stdout)
        self.assertIn('was deleted by k8s cascaded deletion of owner', runner.stdout)
pytest -x
==================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.5, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /src, inifile: pytest.ini
plugins: asyncio-0.10.0
collected 6 items

tests/libvirt_test.py::MyTestCase::test_custom_object_creation_and_deletion
--------------------------------------------------------------------------------------- live log call ----------------------------------------------------------------------------------------
INFO     kopf.objects:libvirt.py:38 Starting libvirt operator
WARNING  kopf.objects:libvirt.py:44 Can't use in cluster configuration, falling back to kubeconfig configuration
WARNING  kopf.reactor.running:running.py:281 OS signals are ignored: running not in the main thread.
INFO     kopf.reactor.activities:activities.py:59 Initial authentication has been initiated.
INFO     kopf.activities.authentication:handling.py:571 Handler 'login_via_pykube' succeeded.
INFO     kopf.activities.authentication:handling.py:571 Handler 'login_via_client' succeeded.
INFO     kopf.reactor.activities:activities.py:68 Initial authentication has finished.
INFO     kopf.objects:libvirt.py:405 Looking after a daemonset with adoption labels: {'adopt-by': 'libvirt-ds'}
INFO     kopf.objects:libvirt.py:358 Node kind-worker does not have a pod. Creating one now.
INFO     kopf.objects:libvirt.py:174 Starting to create pod on node kind-worker
INFO     kopf.objects:handling.py:571 Handler 'create_libvirtds/kind-worker' succeeded.
INFO     kopf.objects:handling.py:571 Handler 'create_libvirtds' succeeded.
INFO     kopf.objects:handling.py:329 All handlers succeeded for creation.
INFO     kopf.objects:libvirt.py:468 Update handler called with: (('add', ('spec', 'template', 'spec', 'nodeSelector'), None, {'libvirt': 'yes'}), ('remove', ('spec', 'template', 'spec', 'affinity'), {'nodeAffinity': {'requiredDuringSchedulingIgnoredDuringExecution': {'nodeSelectorTerms': [{'matchFields': [{'key': 'metadata.name', 'operator': 'In', 'values': ['kind-worker']}]}]}}}, None), ('change', ('spec', 'template', 'spec', 'tolerations'), [{'operator': 'Exists'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/disk-pressure'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/memory-pressure'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/unschedulable'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/network-unavailable'}], [{'operator': 'Exists'}]), ('change', ('spec', 'template', 'spec', 'containers'), [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {'limits': {'memory': '1.74Gi'}, 'requests': {'memory': '1.16Gi'}}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}], [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}]))
INFO     kopf.objects:libvirt.py:251 Looking at pod my-libvirt-ds-rl25c on node kind-worker
INFO     kopf.objects:libvirt.py:262 Found matching pod my-libvirt-ds-rl25c on node kind-worker
INFO     kopf.objects:libvirt.py:263 Found pod with metadata: {'annotations': None, 'cluster_name': None, 'creation_timestamp': datetime.datetime(2020, 2, 10, 13, 2, 30, tzinfo=tzlocal()), 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': ['kopf.zalando.org/KopfFinalizerMarker'], 'generate_name': 'my-libvirt-ds-', 'generation': None, 'initializers': None, 'labels': {'app': 'qemu', 'comp': 'libvirt', 'owner-object-type': 'libvirt-ds'}, 'managed_fields': None, 'name': 'my-libvirt-ds-rl25c', 'namespace': 'default', 'owner_references': [{'api_version': 'oiaas.org/v1', 'block_owner_deletion': True, 'controller': True, 'kind': 'LibvirtDaemonSet', 'name': 'my-libvirt-ds', 'uid': 'a4b1ea71-5c3b-4d71-bb0e-b5b35c4599e9'}], 'resource_version': '537734', 'self_link': '/api/v1/namespaces/default/pods/my-libvirt-ds-rl25c', 'uid': 'b1e466b8-3b18-407d-b589-ed2f40179c46'}
INFO     kopf.objects:libvirt.py:365 Received pod spec update with diff: (('add', ('spec', 'template', 'spec', 'nodeSelector'), None, {'libvirt': 'yes'}), ('remove', ('spec', 'template', 'spec', 'affinity'), {'nodeAffinity': {'requiredDuringSchedulingIgnoredDuringExecution': {'nodeSelectorTerms': [{'matchFields': [{'key': 'metadata.name', 'operator': 'In', 'values': ['kind-worker']}]}]}}}, None), ('change', ('spec', 'template', 'spec', 'tolerations'), [{'operator': 'Exists'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/disk-pressure'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/memory-pressure'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/unschedulable'}, {'operator': 'Exists', 'effect': 'NoSchedule', 'key': 'node.kubernetes.io/network-unavailable'}], [{'operator': 'Exists'}]), ('change', ('spec', 'template', 'spec', 'containers'), [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {'limits': {'memory': '1.74Gi'}, 'requests': {'memory': '1.16Gi'}}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}], [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}]))
INFO     kopf.objects:libvirt.py:366 Starting to update pod my-libvirt-ds-rl25c on node kind-worker
INFO     kopf.objects:libvirt.py:374 Received patch: {'spec': {'containers': [{'image': 'nginx:1.8.1', 'imagePullPolicy': 'IfNotPresent', 'name': 'nginx', 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}, {'command': ['/bin/sleep', '36000'], 'image': 'busybox', 'imagePullPolicy': 'IfNotPresent', 'name': 'busybox', 'resources': {}, 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File'}]}}
INFO     kopf.objects:handling.py:571 Handler 'update_libvirtds/kind-worker' succeeded.
INFO     kopf.objects:handling.py:571 Handler 'update_libvirtds' succeeded.
INFO     kopf.objects:handling.py:329 All handlers succeeded for update.
INFO     kopf.objects:libvirt.py:477 Custom object my-libvirt-ds is scheduled for deletion
INFO     kopf.objects:handling.py:571 Handler 'delete_libvirtds' succeeded.
INFO     kopf.objects:handling.py:329 All handlers succeeded for deletion.
INFO     kopf.objects:libvirt.py:484 Running delete handler for pod my-libvirt-ds-rl25c
INFO     kopf.objects:libvirt.py:495 Pod my-libvirt-ds-rl25c was deleted by k8s cascaded deletion of owner
INFO     kopf.objects:handling.py:571 Handler 'delete_pod' succeeded.
INFO     kopf.objects:handling.py:329 All handlers succeeded for deletion.
ERROR    kopf.reactor.queueing:queueing.py:182 functools.partial(<function resource_handler at 0x7f6ba6180ef0>, lifecycle=<function asap at 0x7f6ba6175ef0>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7f6ba40ca710>, memories=<kopf.structs.containers.ResourceMemories object at 0x7f6b9fe15a50>, resource=Resource(group='', version='v1', plural='pods'), event_queue=<Queue at 0x7f6ba40ca590 maxsize=0 _getters[1] tasks=10>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/queueing.py", line 179, in worker
    await handler(event=event, replenished=replenished)
  File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 223, in resource_handler
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/auth.py", line 46, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.7/site-packages/kopf/clients/patching.py", line 54, in patch_obj
    raise_for_status=True,
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 946, in raise_for_status
    headers=self.headers)
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url=URL('https://127.0.0.1:53032/api/v1/namespaces/default/pods/my-libvirt-ds-rl25c')
INFO     kopf.reactor.running:running.py:457 Stop-flag is set to True. Operator is stopping.
PASSED                                                                          

Environment

  • Kopf version: 0.25
  • Kubernetes version: 1.15.3
  • Python version: 3.7.5
  • OS/platform: Linux docker-desktop 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 GNU/Linux
aiohttp==3.6.2
aiojobs==0.2.2
async-timeout==3.0.1
attrs==19.3.0
cachetools==4.0.0
certifi==2019.11.28
chardet==3.0.4
Click==7.0
google-auth==1.11.0
idna==2.8
importlib-metadata==1.5.0
iso8601==0.1.12
Jinja2==2.11.1
jsonpatch==1.25
jsonpointer==2.0
kopf==0.25
kubernetes==10.0.0
MarkupSafe==1.1.1
more-itertools==8.2.0
multidict==4.7.4
oauthlib==3.1.0
packaging==20.1
pip==19.3.1
pluggy==0.13.1
py==1.8.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pykube-ng==20.1.0
pyparsing==2.4.6
pytest==5.3.5
pytest-asyncio==0.10.0
python-dateutil==2.8.1
PyYAML==5.3
requests==2.22.0
requests-oauthlib==1.3.0
rsa==4.0
setuptools==41.4.0
six==1.14.0
typing-extensions==3.7.4.1
urllib3==1.25.8
wcwidth==0.1.8
websocket-client==0.57.0
wheel==0.33.6
yarl==1.4.2
zipp==2.2.0

brutus333 added the bug label on Feb 10, 2020
@nolar (Contributor) commented Feb 13, 2020

Hello. Thanks for this interesting use-case.

I'm quite surprised that it worked with 0.24 and earlier — it should fail there too. There were no changes that could avoid this error.

There is already code for a similar case — when 404 is returned from patching (see code). However, in your case it is not 404 but 422. We could catch "422 Unprocessable Entity" the same way — if it is indeed the same kind of case from the Kubernetes API's point of view.
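
For illustration, a minimal sketch of tolerating 422 alongside the existing 404 case (not kopf's actual code; the helper name and call shape are assumptions):

import aiohttp

async def patch_obj_tolerantly(session: aiohttp.ClientSession, url: str, patch: dict) -> None:
    # Mirror the existing 404 special case for 422: ignore the patch failure
    # when the object is already gone, or rejected because it is terminating.
    try:
        response = await session.patch(url, json=patch)
        response.raise_for_status()
    except aiohttp.ClientResponseError as e:
        if e.status in (404, 422):
            return
        raise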

It would also be useful to see the full response body of this PATCH request — but that definitely should not be put into the logs.

I will take some time to dive deep into the docs to understand why it is 422. Maybe I can reproduce it locally with the same use-case.

@xavierbaude

Hi, I also ran into this issue with 0.25 but not with release 0.24. When deleting an Ingress object, I get an error from the k8s API: aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity'. It looks like kopf tries to patch or read an object that I've just deleted.

@cliffburdick commented May 18, 2020

Hi @nolar, I'm seeing the same thing with an on.delete handler for pods. Do you need help reproducing it? It seems to happen every time the pod is deleted.

Here is a sample:

[2020-05-18 22:02:31,184] nhd.Node             [INFO    ] Removing pod ('mypod-0', 'p09') from node pp-gcomp001.nae07.v3g-pp-compute.viasat.io
[2020-05-18 22:02:31,196] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7f41cba1d430>, lifecycle=<function asap at 0x7f41cbb74820>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7f41cba328b0>, memories=<kopf.structs.containers.ResourceMemories object at 0x7f41ceff7fa0>, resource=Resource(group='', version='v1', plural='pods'), event_queue=<Queue at 0x7f41cf004250 maxsize=0 _getters[1] tasks=12>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kopf/reactor/queueing.py", line 179, in worker
    await processor(event=event, replenished=replenished)
  File "/usr/local/lib/python3.8/site-packages/kopf/reactor/processing.py", line 114, in process_resource_event
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.8/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.8/site-packages/kopf/clients/patching.py", line 55, in patch_obj
    await context.session.patch(
  File "/usr/local/lib/python3.8/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 941, in raise_for_status
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url=URL('https://10.220.0.17:6443/api/v1/namespaces/p09/pods/mypod-0')
[2020-05-18 22:02:32,495] kopf.reactor.queuein [ERROR   ] functools.partial(<function process_resource_event at 0x7f41cba1d430>, lifecycle=<function asap at 0x7f41cbb74820>, registry=<kopf.toolkits.legacy_registries.SmartGlobalRegistry object at 0x7f41cba328b0>, memories=<kopf.structs.containers.ResourceMemories object at 0x7f41ceff7fa0>, resource=Resource(group='', version='v1', plural='pods'), event_queue=<Queue at 0x7f41cf004250 maxsize=0 _getters[1] tasks=12>) failed with an exception. Ignoring the event.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kopf/reactor/queueing.py", line 179, in worker
    await processor(event=event, replenished=replenished)
  File "/usr/local/lib/python3.8/site-packages/kopf/reactor/processing.py", line 114, in process_resource_event
    await patching.patch_obj(resource=resource, patch=patch, body=body)
  File "/usr/local/lib/python3.8/site-packages/kopf/clients/auth.py", line 45, in wrapper
    return await fn(*args, **kwargs, context=context)
  File "/usr/local/lib/python3.8/site-packages/kopf/clients/patching.py", line 55, in patch_obj
    await context.session.patch(
  File "/usr/local/lib/python3.8/site-packages/aiohttp/client.py", line 588, in _request
    resp.raise_for_status()
  File "/usr/local/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 941, in raise_for_status
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url=URL('https://10.220.0.17:6443/api/v1/namespaces/p09/pods/mypod-0')

@cliffburdick

@nolar / @brutus333 I think I see what's happening. Like the OP, I have a controller for a CRD that creates pods. I think what should happen is that kopf adds the finalizer to those pods after they're created, but I don't see that happening. Instead, when the pod is deleted, kopf tries to add the finalizer:

{'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}

This returns a 422 error code because the pod is already in the terminating state, and it can be reproduced with kubectl as well. I couldn't find the general rule for when kopf is supposed to add the finalizers, but I would expect it to happen before the deletion.
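
For illustration, the kubectl repro as a minimal sketch (hypothetical pod and namespace names, taken from the logs above); Kubernetes refuses to add new finalizers once an object has a deletionTimestamp:

import subprocess

# Assumes the pod is already terminating (its deletionTimestamp is set).
patch = '{"metadata": {"finalizers": ["kopf.zalando.org/KopfFinalizerMarker"]}}'
subprocess.run(
    ['kubectl', 'patch', 'pod', 'mypod-0', '-n', 'p09', '--type', 'merge', '-p', patch],
    check=True,  # fails here: the API server answers 422 Unprocessable Entity
)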

@cliffburdick

A bit more information: it looks like the pod did indeed have the finalizer to begin with. The on.delete handler is called when the pod is deleted, and the finalizer is removed (correctly). But for some reason a second event is fired that re-adds the pod's finalizer, which fails since the pod is no longer there. Here's an example:


[2020-05-26 19:18:54,707] kopf.objects         [DEBUG   ] [p09/chim-0] Invoking handler 'TriadPodDelete'. <--- on.delete handler
[2020-05-26 19:18:54,708] nhd.TriadController  [INFO    ] Saw deleted Triad pod p09.chim-0
[2020-05-26 19:18:54,708] nhd.TriadController  [INFO    ] TriadSet this pod belonged to was deleted. Not restarting pod
[2020-05-26 19:18:54,710] kopf.objects         [INFO    ] [p09/chim-0] Handler 'TriadPodDelete' succeeded.
[2020-05-26 19:18:54,711] kopf.objects         [INFO    ] [p09/chim-0] All handlers succeeded for deletion.
[2020-05-26 19:18:54,713] kopf.objects         [DEBUG   ] [p09/chim-0] Removing the finalizer, thus allowing the actual deletion.
[2020-05-26 19:18:54,713] kopf.objects         [DEBUG   ] [p09/chim-0] Patching with: {'metadata': {'finalizers': []}}
[2020-05-26 19:18:54,842] kopf.objects         [DEBUG   ] [p09/chim-0] Adding the finalizer, thus preventing the actual deletion.
[2020-05-26 19:18:54,843] kopf.objects         [DEBUG   ] [p09/chim-0] Patching with: {'metadata': {'finalizers': ['kopf.zalando.org/KopfFinalizerMarker']}}

@nolar (Contributor) commented May 26, 2020

@cliffburdick Thank you for this investigation!

Can you please verify this with 0.27rc6 in an isolated environment (since it is still only a release candidate)?


The logic for finalizer addition/removal is located in kopf/reactor/processing.py (link1 & link2). Previously, e.g. in 0.25, it was in kopf/reactor/handling.py (link3 & link4).

The finalizer decision logic was significantly reworked in the 0.27 RCs (due to a new special type of handlers: daemons & timers), but it is hard to say which cases were or were not solved as a side effect compared to 0.25.

However, thanks to your investigation, I can make the hypothesis that in 0.25 the finalizer was added because only 2 criteria were used: a finalizer is needed (there are deletion handlers) AND the finalizer is absent on the object — as seen in link 3.

It could only work correctly if the object is removed instantly after the finalizer is removed and there are no additional cycles, e.g. with other controllers that have their own finalizers.

In 0.27, an additional 3rd criterion was added (as seen in link 1): deletion_is_ongoing — if the deletion is indeed ongoing, the finalizer is NOT added even if it seems needed according to the previous two criteria.
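
For illustration, a minimal sketch of that decision (not kopf's actual code; the function name and dict shapes are assumptions):

def should_add_finalizer(body: dict, has_deletion_handlers: bool) -> bool:
    # Criterion 1: a finalizer is needed (there are deletion handlers).
    # Criterion 2: the finalizer is absent on the object.
    # Criterion 3 (new in 0.27): the deletion is not already ongoing.
    meta = body.get('metadata', {})
    has_finalizer = 'kopf.zalando.org/KopfFinalizerMarker' in meta.get('finalizers', [])
    deletion_is_ongoing = meta.get('deletionTimestamp') is not None
    # 0.25 checked only the first two criteria, so a terminating pod could be
    # re-patched with the finalizer, which the API server rejects with 422.
    return has_deletion_handlers and not has_finalizer and not deletion_is_ongoing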

So, with some above-zero probability, the issue is solved. But this needs to be verified.

@cliffburdick

@nolar sure! I'll try it out and report back. For what it's worth, when this happened, requires_finalizer was True and has_finalizer was False.

@cliffburdick

@nolar I can confirm that 0.27rc6 indeed fixes the problem!

I did notice a lot more aiohttp traffic to the k8s server while the pod was active compared to 0.25, but I am no longer seeing the 422 error code. I think this one can likely be closed.

@nolar (Contributor) commented May 26, 2020

@cliffburdick Regarding the traffic: Can you please create a separate issue with some excerpts and data? Is it in bytes or in rps?

The byte-measured traffic can increase due to the double storage of Kopf's own state: annotations PLUS status — for smooth transitioning. Previously it was stored only in status, but Kubernetes's "structural schemas" broke that for K8s 1.16+. This aspect can be configured.
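
For illustration, a sketch of pinning the storage to annotations only, assuming the kopf 0.27 persistence settings (verify the names against the docs for your version):

import kopf

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    # One write target instead of the dual annotations+status transition mode.
    settings.persistence.progress_storage = kopf.AnnotationsProgressStorage()
    settings.persistence.diffbase_storage = kopf.AnnotationsDiffBaseStorage()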

The rps-measured traffic should not be higher than before. In theory. This is worth checking out.

Anyway, I have never tested Kopf for performance yet. Maybe the time has come to start collecting some data & issues for this.

@cliffburdick

@nolar sure. It was rps -- the bytes didn't increase much. I had some debug print statements in the aiohttp library from trying to debug this issue, and saw those increase. I'll try to write up more detail.

@nolar (Contributor) commented May 26, 2020

@cliffburdick PS: 0.27rc6 is going to be 0.27 in a few days — I have finally finished testing it in action. But test it carefully before upgrading anyway — 0.27 is a huge change, and therefore it is risky (despite all attempts at backward compatibility and stability) — and 6 (!) release candidates kind of suggest that it wasn't an easy release.

@cliffburdick

> @cliffburdick PS: 0.27rc6 is going to be 0.27 in a few days — I have finally finished testing it in action. But test it carefully before upgrading anyway — 0.27 is a huge change, and therefore it is risky (despite all attempts at backward compatibility and stability) — and 6 (!) release candidates kind of suggest that it wasn't an easy release.

Great! Luckily I'm still in the testing phase and it's not officially released anyway, so it shouldn't break anything on my end.

@akojima commented Jun 24, 2020

I still see this issue in 0.27, but only when the operator uses a custom finalizer of its own. Whenever the delete handler removes a finalizer, the 422 exception is thrown after the handler returns.

It's mostly harmless: the delete handler finishes fine, and since handlers are supposed to be idempotent anyway, nothing bad happens from the retried delete handler (plus, the extra finalizer is already gone; otherwise I suppose the retries would go on forever). Still, it would be nice if the exception didn't happen, since it prevents clean test runs and can be confusing when troubleshooting.
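
For illustration, a minimal sketch of the pattern (hypothetical resource and finalizer names):

import kopf

MY_FINALIZER = 'example.org/my-own-finalizer'  # hypothetical custom finalizer

@kopf.on.delete('example.org', 'v1', 'widgets')
def delete_fn(meta, patch, **_):
    # Strip only our own finalizer; kopf removes its marker afterwards, and the
    # follow-up patch can race against the vanishing object, surfacing the 422.
    remaining = [f for f in meta.get('finalizers', []) if f != MY_FINALIZER]
    patch.setdefault('metadata', {})['finalizers'] = remaining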

Let me know if you'd like me to provide a minimal test case.
