Timeout limit in the Connect() function leads to CrashLoopBackOff of the CSI driver controller pod #162

Open · adarsh-dell opened this issue Jan 18, 2024 · 3 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


adarsh-dell commented Jan 18, 2024

We were reviewing this PR and found that @ConnorJC3 and team introduced a timeout limit as part of bbcd132.

Can someone please provide insight into the rationale behind this timeout?
Many CSI sidecars rely on this utility. Consider a scenario with two controllers where only one is the leader: the non-leader controller is unable to connect. In the previous setup, with an infinite timeout, the controllers retried the connection indefinitely, so the controller pod stayed in a Running state and avoided CrashLoopBackOff.

Once a pod assumed the leader role, the session was established seamlessly; because of this change, the pod now ends up in CrashLoopBackOff instead.
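For context, here is a minimal sketch of the two dialing behaviors, written against plain grpc-go rather than the csi-lib-utils Connect() wrapper. The socket path, the 30-second deadline, and the fatal-exit handling are illustrative assumptions, not the values or code from bbcd132:

```go
// Sketch only: contrasts an unbounded blocking dial (old behavior) with a
// deadline-bounded dial (new behavior). All constants are assumptions.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const address = "unix:///csi/csi.sock" // hypothetical driver socket

// dialForever mimics the old behavior: block until the socket becomes
// connectable, however long that takes. The pod stays Running meanwhile.
func dialForever() (*grpc.ClientConn, error) {
	return grpc.DialContext(context.Background(), address,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
}

// dialWithTimeout mimics the new behavior: give up once the deadline
// expires. If the caller treats that error as fatal, the container exits
// and kubelet restarts it, which surfaces as CrashLoopBackOff.
func dialWithTimeout(timeout time.Duration) (*grpc.ClientConn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return grpc.DialContext(ctx, address,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
}

func main() {
	conn, err := dialWithTimeout(30 * time.Second) // assumed deadline
	if err != nil {
		log.Fatalf("connect failed: %v", err) // fatal exit -> pod restart
	}
	defer conn.Close()
}
```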

@ConnorJC3
Contributor

Hi @adarsh-dell, as explained in #131, that change fixes an issue where the sidecar attempts to connect to a valid-looking but dead address; in the specific real-world case, it was a Unix socket that had since been replaced.

Can you give more details about your issue? All of the k8s-standard sidecars (external-attacher, external-provisioner, etc.) should be able to connect to the CSI driver both when running as the leader and as an active standby. This is a common configuration that we've tested extensively with our driver. Is there a specific driver or method of reproduction?
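To make the stale-socket failure mode concrete, here is a small sketch; probeSocket is a hypothetical helper and /csi/csi.sock an illustrative path, neither is csi-lib-utils API. The socket file still exists on disk and the address parses, but the process that listened on it is gone, so connection attempts are refused:

```go
// Sketch only: probe a Unix socket to distinguish a live listener from a
// stale socket file left behind by a replaced process.
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocket reports whether anything is actually accepting connections
// on the given Unix socket. A stale socket typically fails fast with
// "connection refused" even though the file looks valid.
func probeSocket(path string, timeout time.Duration) error {
	conn, err := net.DialTimeout("unix", path, timeout)
	if err != nil {
		return fmt.Errorf("socket %s looks dead: %w", path, err)
	}
	conn.Close()
	return nil
}

func main() {
	// With an infinite retry loop, a stale socket here would leave the
	// sidecar retrying forever; a bounded timeout turns it into a
	// visible, restartable failure.
	if err := probeSocket("/csi/csi.sock", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```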

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 17, 2024.
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 17, 2024.