fix: don't error if pod has no containerStatuses (#4471) #4473

Open
ianroberts wants to merge 1 commit into master

Conversation

ianroberts

Summary

kill-host-pods.py iterates over all pods in all namespaces, looking for pods that are running on the current node. If any pod in the iteration does not have containerStatuses in its status structure (e.g. because it is Pending, waiting for a valid node to schedule to), the script currently crashes with a KeyError. This PR changes the code to treat a missing containerStatuses the same as an empty one, so such a pod is correctly skipped rather than the whole script crashing.

Closes #4471

Changes

  • Changed pod["status"]["containerStatuses"] (which raises a KeyError if containerStatuses is missing) to pod["status"].get("containerStatuses") (which simply returns None), as sketched below.
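
A minimal sketch of the guarded loop, assuming the pod data is the usual dict parsed from the Kubernetes API JSON; the names pods and this_node are illustrative, not taken verbatim from kill-host-pods.py:

```python
# Illustrative sketch: `pods` and `this_node` are assumed names, not the
# actual identifiers used in kill-host-pods.py.
for pod in pods["items"]:
    # A Pending pod that has not yet been scheduled may have no
    # "containerStatuses" key at all; .get() returns None in that case
    # instead of raising KeyError.
    container_statuses = pod["status"].get("containerStatuses")
    if not container_statuses:
        continue  # nothing running here, skip the pod instead of crashing
    # ... existing handling for containers running on this_node ...
```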

Testing

Tested locally with a copy of the script; I believe the change is trivial enough not to require automated tests beyond this.

Checklist

  • Read the contributions page.
  • Submitted the CLA form, if you are a first-time contributor.
  • The introduced changes are covered by unit and/or integration tests.

Notes

In certain circumstances pod["status"] may not have a containerStatuses child property at all (e.g. if the pod is Pending and not yet scheduled to any node). Guard against this by using .get (which returns None) instead of [] (which raises a KeyError).
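
To illustrate the difference between the two access styles on a dict shaped like a Pending pod's status (the dict below is a made-up example, not real API output):

```python
# Made-up example of a Pending pod's status that has no containerStatuses key.
status = {"phase": "Pending", "conditions": []}

print(status.get("containerStatuses"))  # prints None, so the pod can be skipped

try:
    status["containerStatuses"]  # subscript access raises KeyError
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'containerStatuses'
```
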
Member

@neoaggelos left a comment

LGTM, thanks a lot for the catch @ianroberts!

@neoaggelos
Member

Would you also mind signing the CLA? And I believe we should backport this change to older branches as well.

@ianroberts
Author

> Would you also mind signing the CLA? And I believe we should backport this change to older branches as well.

I did just sign the CLA, though the email address I've used is a secondary email on my Launchpad account (iroberts-usfd) rather than the preferred one.

@neoaggelos
Member

The CLA check fails with:

Checking the following users on Launchpad:
- ian@$DOMAIN ✕ (has no Launchpad account)

Can you amend the commit and use the email that you signed the CLA with?

@ianroberts
Author

> The CLA check fails with:
>
> Checking the following users on Launchpad:
> - ian@$DOMAIN ✕ (has no Launchpad account)
>
> Can you amend the commit and use the email that you signed the CLA with?

That's what I meant: the email I signed the CLA with is ian@$DOMAIN, and that email address is associated with my Launchpad account, but it was a secondary rather than the primary email address.

I've just edited my Launchpad profile to swap the primary and secondary addresses; can you run the CLA checker again now?

@ianroberts
Author

Sorry, my bad: I had added the address to my Ubuntu One account but not directly to my Launchpad account; it should be there now. But I'm not showing as a member of https://launchpad.net/~contributor-agreement-canonical, so maybe I need to fill in the CLA again now that the right emails are all linked?

@neoaggelos
Member

Yes, can you please sign the CLA with the email address used in the commit? Thank you!

@ianroberts
Author

ianroberts commented Apr 1, 2024

I already had, twice... I've just filled in the form again; let's see if it takes this time.

Edit: I see it's at least found the matching LP account for my email now.

Development

Successfully merging this pull request may close these issues.

KeyError in kill-host-pods.py