handle nodes without UID #1237

Closed

wants to merge 1 commit into from

Conversation

@freedge freedge commented May 7, 2024

On a large OpenStack deployment with high latency, we observe the MHC controller remediating nodes immediately after their creation. The remediation happens without any log explaining its cause, and the only code path where that can happen is when node.UID is empty. We do see a node Name in the target string() that is logged, so we know that node.Name is present at that time.

Change the code so that a node is considered to exist when it has a name but no UID, even though the circumstances under which this happens are unclear.

Also add a log so that every code path leading to remediation is clearly visible.

Signed-off-by: François Rigault <frigo@amadeus.com>
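
For illustration, the shape of the check this PR proposes could look like the sketch below; the function name and log message are placeholders rather than the controller's actual code, and the real one-line diff appears later in the review thread.

```go
package sketch

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// needsRemediationForMissingNode is a minimal sketch of the check this PR
// proposes (function name and log message are illustrative, not the
// controller's code): a node is only treated as non-existent when it has
// neither a UID nor a Name, and the decision is logged so the code path
// that leads to remediation is visible.
func needsRemediationForMissingNode(node *corev1.Node) (bool, time.Duration) {
	if node != nil && node.UID == "" && node.Name == "" {
		fmt.Println("node object has neither UID nor Name; treating it as missing and requesting remediation")
		return true, time.Duration(0)
	}
	return false, time.Duration(0)
}
```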
@openshift-ci openshift-ci bot requested review from beekhof and JoelSpeed May 7, 2024 16:33
Contributor

openshift-ci bot commented May 7, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign joelspeed for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label May 7, 2024
Contributor

openshift-ci bot commented May 7, 2024

Hi @freedge. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Member

@slintes slintes left a comment


The intention of the code you identified is to remediate machines which should have a node (the noderef on the Machine is set) but whose node doesn't exist. And that's probably the correct place in the code for the node deletion you're observing.

However:

> We do see a node Name in the target string() that is logged, so we know that a node.Name is present at that time.

It's a wrong assumption, though, that we found the node when we have its name; see the inline comment.

I think we have a kind of race condition here: when the noderef on the Machine is set, a reconcile is triggered. However, the node might not be in the MHC controller's cache yet, probably because of the "large deployment on OpenStack, with high latency". We would need to deal with that case better, maybe with a "retry to get the node once" approach... 🤔
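
For illustration, a minimal sketch of what a "retry to get the node once" approach could look like with the controller-runtime client; all names here are placeholders, not the controller's actual code.

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getNodeWithRetry sketches the "retry to get the node once" idea: when the
// node referenced by the Machine's nodeRef is not in the informer cache yet,
// wait briefly and try one more time before concluding it does not exist.
func getNodeWithRetry(ctx context.Context, c client.Client, nodeName string) (*corev1.Node, error) {
	node := &corev1.Node{}
	key := types.NamespacedName{Name: nodeName}

	err := c.Get(ctx, key, node)
	if apierrors.IsNotFound(err) {
		// The cache may simply be lagging behind the API server on a
		// high-latency cluster; give it one more chance.
		time.Sleep(2 * time.Second)
		err = c.Get(ctx, key, node)
	}
	if err != nil {
		return nil, err
	}
	return node, nil
}
```

In a real reconciler one would more likely return early and requeue with a short RequeueAfter instead of sleeping, but the idea is the same: give the cache one more chance before declaring the node missing.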

@@ -788,7 +788,8 @@ func (t *target) needsRemediation(timeoutForMachineToHaveNode time.Duration) (bo
 	}

 	// the node does not exist
-	if t.Node != nil && t.Node.UID == "" {
+	if t.Node != nil && t.Node.UID == "" && t.Node.Name == "" {
Member

This will never be true:

  • The node is nil when noderef isn't set on the Machine yet.
  • When noderef is set, and the Node was found, it has a UID.
  • When the Node wasn't found, the name is set by MHC code based on the noderef, but the UID stays empty.

So we can never have a non-nil node without a name.

See
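
To make the three cases above concrete, here is a minimal sketch of how a target's node can end up nil, found with a UID, or name-only with an empty UID; the names are purely illustrative and this is not the controller's actual code.

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// nodeForTarget sketches the three cases: the target node is nil only while
// the Machine has no nodeRef; a Node that was found carries a UID; a Node
// that was not found keeps the name from the nodeRef but has no UID.
func nodeForTarget(ctx context.Context, c client.Client, nodeRef *corev1.ObjectReference) (*corev1.Node, error) {
	if nodeRef == nil {
		// Case 1: no nodeRef on the Machine yet, the target node stays nil.
		return nil, nil
	}
	node := &corev1.Node{}
	err := c.Get(ctx, types.NamespacedName{Name: nodeRef.Name}, node)
	if apierrors.IsNotFound(err) {
		// Case 3: the Node was not found; keep the name, leave the UID empty.
		node.Name = nodeRef.Name
		return node, nil
	}
	if err != nil {
		return nil, err
	}
	// Case 2: the Node was found; the API server has set its UID.
	return node, nil
}
```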

Author

@freedge freedge commented May 9, 2024

Thank you for the review!

> retry to get the node once

Couldn't we just do

https://github.com/openshift/machine-api-operator/blob/master/pkg/controller/machinehealthcheck/machinehealthcheck_controller.go#L769-L770

`if t.Node == nil || t.Node.UID == ""`

so that the case where the Node's UID is empty is equivalent to the case where the node is not found (and the nodeStartupTimeout logic applies)?
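
As a sketch of that suggestion (illustrative names only, not the controller's actual code), folding the empty-UID case into the missing-node path would let the nodeStartupTimeout grace period decide instead of remediating immediately:

```go
package sketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// treatEmptyUIDAsMissing handles a non-nil node with an empty UID exactly
// like a node that was never found, so the nodeStartupTimeout grace period
// decides whether to remediate or to check again later.
func treatEmptyUIDAsMissing(node *corev1.Node, machineAge, nodeStartupTimeout time.Duration) (remediate bool, retryAfter time.Duration) {
	if node == nil || node.UID == "" {
		if machineAge >= nodeStartupTimeout {
			// The node never showed up within the allowed startup time.
			return true, 0
		}
		// Not a failure yet: check again once the startup timeout expires.
		return false, nodeStartupTimeout - machineAge
	}
	// The node exists; other health checks apply from here.
	return false, 0
}
```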

I think I had another similar issue with openshift/machine-config-operator#4357.

Also, I did not report this, but I see our MHCs performing around 50 node health checks per second. I'm wondering whether every single machine causes a health check to be triggered in the next 5 minutes (assuming that's what is configured as the unhealthy condition timeout); in that case an MHC targeting 35 nodes would trigger 35 * 35 checks per 5 minutes.

(I appreciate the time you took to reply to my PR! I can open an OCPBUGS issue or a customer case if you feel that is more appropriate. Thanks a ton!)

Member

@slintes slintes commented May 13, 2024

Hi, sorry for the delay, there was some PTO last week.

> Couldn't we just do `if t.Node == nil || t.Node.UID == ""` ...

That would make remediation slower for those who really have an issue with the node not being created 🤔

Yes, please create an OCPBUGS issue (Cloud Compute / MachineHealthCheck); I'm afraid this needs some more consideration. Feel free to assign it to me, and I will then discuss with the team what the best solution here is.

About the number of health checks: changes on MHCs, Machines or Nodes trigger a health check. The unhealthy condition timeout only applies if a node condition matches an unhealthy condition; in that case another health check is scheduled after the timeout. If this doesn't explain what you see, please create a new issue for it, ideally with MHC logs.
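
For illustration, the "scheduled after the timeout" part typically maps to a requeue in a controller-runtime reconciler; a minimal sketch under that assumption, with illustrative names rather than the controller's actual code:

```go
package sketch

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// scheduleRecheck asks the reconciler to run again once the remaining part
// of the unhealthy condition timeout has passed; with no time remaining,
// no re-check is scheduled by this path.
func scheduleRecheck(remaining time.Duration) ctrl.Result {
	if remaining > 0 {
		return ctrl.Result{RequeueAfter: remaining}
	}
	return ctrl.Result{}
}
```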

@freedge freedge closed this May 13, 2024
Author

@freedge freedge commented May 13, 2024

> If this doesn't explain what you see

I am also opening https://issues.redhat.com/browse/OCPBUGS-33586. I don't know if I can attach data to it myself, so I might end up creating a case.
