
add support for the OpenRC as init system #1295

Closed
btrepp opened this issue Dec 3, 2018 · 55 comments · Fixed by kubernetes/kubernetes#73101
Labels
area/ecosystem help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/node Categorizes an issue or PR as relevant to SIG Node.
Milestone

Comments

btrepp commented Dec 3, 2018

EDIT by neolit123:

the init system is already supported, yet kubeadm still assumes systemd in paths and messages:
see:
#1295 (comment)

also see this workaround:
#1295 (comment)


BUG REPORT

It looks like Alpine Linux's init system isn't supported by kubeadm.
kubeadm prints warnings about this and continues, but I assume it never configures a kubelet service,
so the kubelet never starts and the init can't finish.

It would be awesome if we could host a Kubernetes cluster on Alpine.

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"archive", BuildDate:"2018-11-15T16:26:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"archive", BuildDate:"2018-11-15T16:26:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server localhost:8080 was refused - did you specify the right host or port?

  • Cloud provider or hardware configuration:
    HyperV on windows

  • OS (e.g. from /etc/os-release):
    NAME="Alpine Linux"
    ID=alpine
    VERSION_ID=3.8.1
    PRETTY_NAME="Alpine Linux v3.8"
    HOME_URL="http://alpinelinux.org"
    BUG_REPORT_URL="http://bugs.alpinelinux.org"

  • Kernel (e.g. uname -a):
    Linux kubemanager1 4.14.84-0-virt #1-Alpine SMP Thu Nov 29 10:58:53 UTC 2018 x86_64 Linux

  • Others:

What happened?

kubeadm init failed to start a kubelet thus failed to run

What you expected to happen?

kubeadm to init correctly

How to reproduce it (as minimally and precisely as possible)?

kubeadm init

Anything else we need to know?

docker ps -a returns nothing. No container was ever started

kubeadm init
[init] using Kubernetes version: v1.12.3
[preflight] running pre-flight checks
[WARNING Firewalld]: no supported init system detected, skipping checking for services
[WARNING HTTPProxy]: Connection to "https://10.1.1.20" uses proxy "http://10.1.1.1:3128". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://10.1.1.1:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] no supported init system detected, won't make sure the kubelet not running for a short period of time while setting up configuration for it.
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] no supported init system detected, won't make sure the kubelet is running properly.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubemanager1 localhost] and IPs [10.1.1.20 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubemanager1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubemanager1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.20]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

kad commented Dec 3, 2018

Please first fix the warnings that kubeadm is reporting. E.g. start by defining a proper value for the NO_PROXY environment variable, then make sure that all needed binaries are present on the system (tc, ebtables, ...), and then check what is in the kubelet's status and logs.

kad commented Dec 3, 2018

/assign

@neolit123 neolit123 added area/ecosystem priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Dec 3, 2018
btrepp commented Dec 4, 2018

With all warnings fixed apart from no supported init system being detected, it still has the same issue.

kubeadm init
I1204 10:42:06.894219 7292 version.go:236] remote version is much newer: v1.13.0; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.3
[preflight] running pre-flight checks
[WARNING Firewalld]: no supported init system detected, skipping checking for services
[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] no supported init system detected, won't make sure the kubelet not running for a short period of time while setting up configuration for it.
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] no supported init system detected, won't make sure the kubelet is running properly.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

btrepp commented Dec 7, 2018

Also, not supporting the init system (OpenRC) is totally understandable; maybe an improvement here is just some documentation of the supported init systems (or a statement that only systemd is supported, if that's the case).

kad commented Dec 7, 2018

Can you share what is in the kubelet logs and in the docker containers (if any are running after the kubeadm error messages)?

btrepp commented Dec 7, 2018

Hi kad, as far as I can tell there is no kubelet process running, and no containers are ever started.

I know little about kubeadm's internals, but it appears it wants to configure a service at the beginning (e.g. via systemd), can't find a supported init system, so it skips that step, but later on it waits for that init system to have started the kubelet.

ps
PID USER TIME COMMAND
1 root 0:00 /sbin/init
2 root 0:00 [kthreadd]
4 root 0:00 [kworker/0:0H]
5 root 0:00 [kworker/u64:0]
6 root 0:00 [mm_percpu_wq]
7 root 0:00 [ksoftirqd/0]
8 root 0:00 [rcu_sched]
9 root 0:00 [rcu_bh]
10 root 0:00 [migration/0]
11 root 0:00 [watchdog/0]
12 root 0:00 [cpuhp/0]
13 root 0:00 [kdevtmpfs]
14 root 0:00 [netns]
16 root 0:00 [oom_reaper]
174 root 0:00 [writeback]
175 root 0:00 [kworker/0:1]
176 root 0:00 [kcompactd0]
178 root 0:00 [ksmd]
179 root 0:00 [crypto]
180 root 0:00 [kintegrityd]
182 root 0:00 [kblockd]
445 root 0:00 [ata_sff]
454 root 0:00 [md]
460 root 0:00 [watchdogd]
585 root 0:00 [kauditd]
591 root 0:00 [kswapd0]
679 root 0:00 [kthrotld]
911 root 0:00 [hv_vmbus_con]
1182 root 0:00 [scsi_eh_0]
1255 root 0:00 [scsi_tmf_0]
1264 root 0:00 [kworker/u64:3]
1406 root 0:00 [jbd2/sda3-8]
1407 root 0:00 [ext4-rsv-conver]
1821 root 0:00 [hv_balloon]
1874 root 0:00 [ipv6_addrconf]
1965 root 0:00 [kworker/0:1H]
2235 root 0:00 /sbin/syslogd -Z
2289 root 0:00 /sbin/acpid
2318 chrony 0:00 /usr/sbin/chronyd -f /etc/chrony/chrony.conf
2345 root 0:00 /usr/sbin/crond -c /etc/crontabs
2447 root 0:06 /usr/bin/dockerd -p /run/docker.pid
2480 root 0:00 /usr/sbin/sshd
2485 root 0:00 /sbin/getty 38400 tty1
2486 root 0:00 /sbin/getty 38400 tty2
2489 root 0:00 /sbin/getty 38400 tty3
2491 root 0:00 /sbin/getty 38400 tty4
2495 root 0:00 /sbin/getty 38400 tty5
2498 root 0:00 /sbin/getty 38400 tty6
2507 root 0:00 sshd: root@pts/0
2509 root 0:00 -ash
2514 root 0:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
2964 root 0:00 [kworker/0:0]
3064 root 0:00 sshd: root@pts/1
3066 root 0:00 -ash
3241 root 0:00 [kworker/u64:1]
3311 root 0:00 [kworker/0:2]
3314 root 0:00 /sbin/getty -L 115200 ttyS0 vt100
3315 root 0:00 ps

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

@timothysc timothysc added this to the Next milestone Jan 7, 2019
@timothysc timothysc added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jan 7, 2019
@foxyriver

Only the systemd and wininit init systems are supported. You could install the kubelet manually and remove the code in kubeadm that installs the kubelet configuration file; maybe that works.

@neolit123

is alpine linux a target for us?
we might rely on the community to patch it.

@bryanhuntesl

is alpine linux a target for us?
we might rely on the community to patch it.

Alpine Linux is a very popular target for containers, due to its extremely small size/install, and also rather popular for Vagrant/EC2, so I'm surprised it's not supported. I grepped through the kubeadm code, and it seems like it's just messing with systemd in order to start the docker/kubernetes stuff.

Is there a document describing what kubeadm does / intends to do / depends upon from the init system?

@neolit123 neolit123 added kind/feature Categorizes issue or PR as related to a new feature. sig/node Categorizes an issue or PR as relevant to SIG Node. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jan 8, 2019
neolit123 commented Jan 8, 2019

Is there a document describing what kubeadm does / intends to do / depends upon from the init system

on Linux it uses systemd to start / stop the kubelet:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/kubelet.go

this document partially outlines the kubeadm / systemd interaction:
https://kubernetes.io/docs/setup/independent/kubelet-integration/#configure-kubelets-using-kubeadm

https://wiki.alpinelinux.org/wiki/Alpine_Linux_Init_System

Alpine Linux uses OpenRC for its init system.

this init system is not supported by core kubernetes.
in this case kubeadm uses what is available in the core.

[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services

this comes from here:
https://github.com/kubernetes/kubernetes/blob/master/pkg/util/initsystem/initsystem.go#L178
and new init systems have to be added there.

@neolit123 neolit123 changed the title Kubeadm init fails on alpine 3.8 Alpine Linux requires support for OpenRC as init system Jan 8, 2019
@neolit123

/assign @timothysc @detiber
for judgement on this one.

detiber commented Jan 8, 2019

Has someone already packaged the necessary binaries (and the required init scripts) for Alpine? If so, I don't see an issue with adding proper support for managing services correctly. If not, then I would consider that a prerequisite for this to proceed, since the management of init scripts/config isn't the responsibility of kubeadm.

bryanhuntesl pushed a commit to binarytemple/vagrant-alpine64-k8s that referenced this issue Jan 9, 2019
rosti commented Jan 9, 2019

There seems to be a single kubernetes package here.

detiber commented Jan 9, 2019

@rosti Looking at the contents of that package it basically looks like a dump of multiple k8s binaries and does not include an init script or config required to be driven by kubeadm.

@bcdurden

I'm normally a lurker, but there's industry interest in Kubernetes on the edge using ARM, and various bare-metal options are being investigated, with Alpine in the mix of OS choices.

I think OpenRC support in kubeadm is kind of a must-have; I'm not certain Alpine's community is going to put forward a patch that 'fixes' something so fundamental to the OS's claim to fame.

@bryanhuntesl

I'm normally a lurker, but there's industry interest in Kubernetes on the edge using ARM, and various bare-metal options are being investigated, with Alpine in the mix of OS choices.

I think OpenRC support in kubeadm is kind of a must-have; I'm not certain Alpine's community is going to put forward a patch that 'fixes' something so fundamental to the OS's claim to fame.

I strongly suspect you are correct - with the memory/image size they're targeting - I really can't see them going the (no disrespect to) systemd route.

bcdurden commented Jan 14, 2019

Is there a document describing what kubeadm does / intends to do / depends upon from the init system

on Linux it uses systemd to start / stop the kubelet:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/kubelet/kubelet.go

this document partially outlines the kubeadm / systemd interaction:
https://kubernetes.io/docs/setup/independent/kubelet-integration/#configure-kubelets-using-kubeadm

https://wiki.alpinelinux.org/wiki/Alpine_Linux_Init_System

Alpine Linux uses OpenRC for its init system.

this init system is not supported by core kubernetes.
in this case kubeadm uses what is available in the core.

[WARNING Service-Docker]: no supported init system detected, skipping checking for services
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services

this comes from here:
https://github.com/kubernetes/kubernetes/blob/master/pkg/util/initsystem/initsystem.go#L178
and new init systems have to be added there.

At first glance, this actually looks quite straightforward. I'm not a Go aficionado by any means, but it appears to just be making direct calls to a shell. Adding another implementor of that InitSystem interface that works for OpenRC, plus an OpenRC service script, would probably do it.

EDIT:
Diving in, getting Kubernetes onto Alpine-ARM is going to require some work. Running the kubelet manually is possible, but after significant time debugging I suspect that there's a networking issue afoot, as the apiserver fails to sync with etcd when doing a basic init with kubeadm.

rosti commented Jan 15, 2019

@detiber you are correct there. But some package is better than no package. This means that we have a maintainer we can ping with a specific proposal.

@neolit123 neolit123 added lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. and removed help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels May 31, 2019
@neolit123

marked as active as @oz123 mentioned that he can look at followup changes.

mrueg commented Jul 27, 2019

Since the above PR mentions partial support, what is currently missing for full support, @oz123?

oz123 commented Jul 28, 2019

@mrueg what's missing is almost everything we discussed in this thread. I currently lack the time to complete the work, if someone would like to sponsor it feel free to contact me. If another person wants to take over this work I am also fine with that.

@neolit123 neolit123 added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. labels Oct 10, 2019
@neolit123 neolit123 changed the title Alpine Linux requires support for OpenRC as init system add support for the OpenRC as init system Oct 13, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 11, 2020
@neolit123

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 11, 2020
@neolit123

this is a pending openrc issue: #1986

@kamikaze

unable to run on alpine as well :(

@PhoenixMage

I am using k8s on Alpine (a single cluster running on x86_64, armv7 and aarch64). As a workaround, when joining a node to a cluster I manually restart the kubelet when it fails; this seems to be needed only once.

xphoniex commented May 8, 2020

I managed to start a cluster the other day with some help from @neolit123.

Furthermore, to complete this I need some cooperation from @fcolista to fix the Alpine side of things. I already contacted you but got no response.

Otherwise everything from kubeadm side works fine now and this issue can be closed once my PR is merged.

fcolista commented May 8, 2020

@xphoniex I'm avail to help.
I prefer that your patch is applied upstream...at the moment Alpine version is stucked at 1.17.3, since 1.18 does not build with go 1.13.
Let me know what kind of help/cooperation you need from my side.
Thanks!

xphoniex commented May 8, 2020

fcolista commented May 8, 2020

@xphoniex they have been merged
.: Francesco

@xphoniex

we can close this too now that kubernetes/kubernetes#90892 is merged @neolit123 , yes?

@neolit123

i think this is the final remaining item:
#1295 (comment)

@xphoniex

Alpine already uses /etc/ for services, so we kept the config files there too.

We only had to update the flags in kubelet.confd and kubelet.initd in the kubernetes package to let OpenRC know where the rest of the config files were; you can see the diff here.

Notice, for example, that we set --cni-bin-dir=/usr/share/cni-plugins/bin as per Francesco's suggestion, whereas on other distros the binaries are expected to be in /opt/cni/bin.

@neolit123

understood, this is great news and i'm going to close this ticket (finally).
/close

a couple of FYI WRT service files:

  • an update to the 10-kubeadm.conf file is imminent at this point, yet it's not clear when; maybe in +3, maybe +5 releases.
    the kubelet is removing all its flags in favor of configuration file values via --config. when this happens we are going to stop sourcing kubeadm-flags.env and /etc/default/kubelet in 10-kubeadm.conf, and kubeadm will stop generating kubeadm-flags.env at runtime.

  • dockershim, which is the CRI implementation for docker, is moving out of the kubelet source code into a separate git repository and a separate service, so docker users will have to run it separately before starting the kubelet service. it's unclear what the userbase for docker on alpine is, but overall docker usage among kubeadm users is about 70%, as per a survey we did a couple of years ago.

@k8s-ci-robot

@neolit123: Closing this issue.

