
Update recover-control-plane.md doc #10844

Closed
user81230 opened this issue Jan 26, 2024 · 2 comments · Fixed by #11155
Labels

  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@user81230
Contributor

What would you like to be added

https://github.com/kubernetes-sigs/kubespray/blob/master/docs/recover-control-plane.md

Move any broken etcd nodes into the "broken_etcd" group, make sure the "etcd_member_name" variable is set.
Move any broken control plane nodes into the "broken_kube_control_plane" group.

The doc says the user should move the host from one group to another; however, the only way this works correctly is to copy the hostnames into the "broken" groups without removing them from their original groups (see the example inventory below). As mentioned by @floryut in #7198:

Well, as far as I remember (and looking quickly at the code) you should put the etcd in broken_etcd group but not move it out of etcd group.

It took me a good deal of time to figure this out. Please clarify this in the documentation for others. Thanks!
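For illustration, a minimal inventory sketch of what I mean (hostnames and etcd_member_name values are placeholders, not from the docs): the broken node stays in its original etcd and kube_control_plane groups and is only additionally listed in the broken_* groups.

```ini
# Hypothetical example inventory; node3 is the broken member.
# It remains in the original kube_control_plane and etcd groups
# and is additionally listed in the broken_* groups.

[kube_control_plane]
node1
node2
node3

[etcd]
node1 etcd_member_name=etcd1
node2 etcd_member_name=etcd2
node3 etcd_member_name=etcd3

[broken_kube_control_plane]
node3

[broken_etcd]
node3 etcd_member_name=etcd3
```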

Why is this needed

The documentation is confusing.

@user81230 added the kind/feature label on Jan 26, 2024
@floryut
Member

floryut commented Jan 26, 2024

Wow this was a while ago 😆
The documentation is indeed confusing.

@k8s-triage-robot
Copy link

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 25, 2024
user81230 added a commit to user81230/kubespray that referenced this issue May 2, 2024
k8s-ci-robot pushed a commit that referenced this issue May 13, 2024