
Is there a beginner's guide/tutorial to run Kubernetes with Kine #124

Open
Steamgjk opened this issue Jun 29, 2022 · 2 comments

@Steamgjk

Steamgjk commented Jun 29, 2022

Update: I think I found https://github.com/k3s-io/kine/blob/master/examples/minimal.md
Let me first try this tutorial to see whether I can run it.

I have a similar question to #112.

Since Kine is supposed to "be ran standalone so any k8s (not just k3s) can use Kine", I wonder whether the development team could provide a Quick Start Guide for readers/users to launch a demo easily, e.g. how to run a small Kubernetes cluster with MySQL/etcd/Dqlite as the backend using Kine.

I have searched the markdown files in this repo, and it seems there is no such tutorial.

@brandond
Contributor

brandond commented Jun 29, 2022

Just run kine (either via the binary or the Docker image), and then point your Kubernetes distro of choice at it as the etcd datastore. Or use k3s, which already has kine built into it.
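A minimal sketch of that flow, assuming kine's default listen port of 2379 (the MySQL DSN mirrors the one used later in this thread; the block only prints the two commands rather than launching anything):

```shell
# Compose the two commands described above: run kine against a backend,
# then hand its address to the apiserver as if it were etcd.
KINE_ENDPOINT="http://127.0.0.1:2379"   # kine's default listen address

echo "kine --endpoint 'mysql://root:\$PASSWORD@tcp(127.0.0.1:3306)/kine'"
echo "kube-apiserver --etcd-servers=${KINE_ENDPOINT}"
```

Any distro that lets you set the external etcd endpoints (kubeadm's external etcd mode, for example) can consume kine this way.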

@Steamgjk
Author

Steamgjk commented Jun 29, 2022

Just want to give an update: I finally ran the MySQL demo successfully with Kine.
I am using kubeadm + Kine + MySQL.

Here I list some of the pitfalls you may encounter, as complementary material to minimal.md, to make it easier for follow-up readers to run the demo.

I am using a VM in Google Cloud, with Ubuntu 20.04.
I assume you have already installed MySQL and exported PASSWORD as an env var. I did not use the MySQL Docker image; instead, I installed it directly via apt-get (mysql-server).

(1) generate-certs.sh can give you some certificates, but they may not work for your machine. I used the certs generated by the script, but when I launched Kine it kept telling me "x509: certificate is not valid for any names, but wanted to match localhost". I googled for some time but found no way to generate the correct certs (the openssl command is really complex). If someone finds the proper way to generate the certs, please ping me.
[screenshot: the x509 error]
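For anyone else hitting this: that particular x509 error usually means the serving certificate carries no Subject Alternative Names matching the host. I have not verified this against generate-certs.sh, but a generic openssl recipe that does include SANs looks like this (file names and CN values are arbitrary):

```shell
# Create a throwaway CA, then sign a server cert whose SANs cover the
# names kine will be reached by (localhost / 127.0.0.1 here).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca-key.pem -out ca.pem -subj "/CN=kine-ca"

openssl req -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr -subj "/CN=kine"

openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -days 365 \
  -extfile <(printf "subjectAltName=DNS:localhost,IP:127.0.0.1")
```

The resulting ca.pem, server.pem and server-key.pem would then go to kine's --ca-file, --cert-file and --key-file flags.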

Workaround: just disable the certs. Run like this:

 /home/steam1994/kine/kine --endpoint "mysql://root:$PASSWORD@tcp(127.0.0.1:3306)/kine" --ca-file "" --cert-file "" --key-file ""

[screenshot: Kine startup output]

Now we have Kine running with MySQL.

(2) Next, run kubeadm. I suggest you follow the instructions in https://github.com/hub-kubernetes/kubeadm-multi-master-setup
Just take a look at the section "Install kubeadm, kubelet and docker on master and worker nodes".

I have installed these items following its instructions.

kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

(3) The YAML in minimal.md is too complex; here I provide a much simpler one:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"
etcd:
  external:
    endpoints:
      - http://127.0.0.1:2379
    # caFile: /etc/kubernetes/pki/etcd/ca.pem
    # certFile: /etc/kubernetes/pki/etcd/etcd.pem
    # keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.128.3.79"
nodeRegistration:
  criSocket: "unix:///var/run/cri-dockerd.sock"

As you can see, I have commented out caFile, certFile, and keyFile.
Here, the possible pitfall is the criSocket attribute. Some older versions of the kubeadm API (e.g. kubeadm.k8s.io/v1beta2) do not require it, but in newer versions (e.g. v1beta3) we must specify it.
My VM gives me two options:
[screenshot: the two criSocket options]
But only unix:///var/run/cri-dockerd.sock works. You can try running kubeadm init to see which runtime options you have on your VM, and which runtime works for you.

(4) After fixing the previous problems, I came across another problem:
[screenshot: error output]

If you also encounter this problem, the solution is to run the following command

crictl config runtime-endpoint unix:///var/run/cri-dockerd.sock
crictl config image-endpoint unix:///var/run/cri-dockerd.sock

Of course, you should configure the endpoint to the runtime that works on your VM; on my VM, the working runtime is unix:///var/run/cri-dockerd.sock.
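For reference, those two crictl config commands just persist the endpoints to /etc/crictl.yaml, so writing the file by hand (with whichever socket works on your VM) is equivalent:

```yaml
# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
```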

After that, everything is fine; the normal logs should look like the following:

root@opensource-nezha:/home/steam1994# kubeadm init --config kubeadm-local.yaml
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local opensource-nezha] and IPs [10.96.0.1 10.128.3.79]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.503913 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node opensource-nezha as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node opensource-nezha as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: q56d6h.dd9hp39oyicb79ay
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.128.3.79:6443 --token q56d6h.dd9hp39oyicb79ay \
	--discovery-token-ca-cert-hash sha256:2f821b012b6ea0ef7fff9efbf65b8bec63dc5cb4449bbe0a4db538a68acce020

Then you can open another terminal and check MySQL:

[screenshot: MySQL query results]

As you can see, there is a database called kine, which contains a table also called kine; the table has 9 columns, and Kine has written 1189 rows of records into it.
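For completeness, the checks in that screenshot can be reproduced from the shell; the block below only prints the mysql invocations (it assumes the PASSWORD env var exported earlier) rather than running them:

```shell
# Print the MySQL commands used to inspect what kine wrote.
for q in \
  "SHOW TABLES FROM kine;" \
  "SELECT COUNT(*) FROM kine.kine;"; do
  echo "mysql -u root -p\"\$PASSWORD\" -e '${q}'"
done
```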

The only issue I have not fixed is how to generate correct cert files, but that should not be a big issue. I remember that some time ago I followed the instructions here and successfully ran TLS-based etcd with kubeadm. I feel the cert generation should be similar, but I haven't looked closely at the difference. The cert error is probably due to some hardcoding in generate-certs.sh. At your convenience @brandond, I hope you can take a look at this issue: why does it cause the error "x509: certificate is not valid for any names, but wanted to match localhost"? Thanks!
