24.0.0+1.27.8: major refactoring (#48)
githubixx committed Dec 13, 2023
1 parent 5fa401c commit 18a8078
Showing 40 changed files with 889 additions and 748 deletions.
3 changes: 0 additions & 3 deletions .ansible-lint

This file was deleted.

47 changes: 47 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,52 @@
# Changelog

## 24.0.0+1.27.8

### PLEASE READ CAREFULLY

This release contains quite a few potentially breaking changes! So review them carefully before rolling out the new version of this role! A big part of the changes is related to increasing security. While most of the new variables and defaults should be fine and work out of the box, side effects might occur.

All the newly introduced or changed variables have detailed comments in [README](https://github.com/githubixx/ansible-role-kubernetes-worker/blob/master/README.md). So please read them carefully!

This refactoring was needed to make it possible to have e.g. `githubixx.kubernetes_controller` and `githubixx.kubernetes_worker` deployed on the same host. There were some intersections between the two roles that had to be fixed.

### UPDATE

- update `k8s_release` to `1.27.8` (note: the variable is now named `k8s_worker_release`, see the diff of `defaults/main.yml`)

### BREAKING

- Rename variable `k8s_conf_dir` to `k8s_worker_conf_dir`. Additionally, the default value changed from `/var/lib/kubernetes` to `/etc/kubernetes/worker` (a sketch of overrides to restore the old values follows after this list).
- Rename variable `k8s_bin_dir` to `k8s_worker_bin_dir`.
- The `k8s_worker_binaries` variable is no longer defined in `defaults/main.yml` but in `vars/main.yml`. Since this list is fixed anyway, it makes no sense to allow modifying it.
- The `k8s_worker_certificates` variable is no longer defined in `defaults/main.yml` but in `vars/main.yml`. Since this list is fixed anyway, it makes no sense to allow modifying it.
- Introduce variable `k8s_worker_pki_dir`. All certificate files specified in `k8s_worker_certificates` (see `vars/main.yml`) will be stored here. Related to this: certificate-related settings in `k8s_worker_kubelet_conf_yaml` used `k8s_conf_dir` before and now use `k8s_worker_pki_dir`. Those are `clientCAFile`, `tlsCertFile` and `tlsPrivateKeyFile`.
- The default value for `k8s_interface` changed from `tap0` to `eth0`.
- The variable `k8s_config_directory` is gone; it's no longer in use. After the upgrade to this release you can delete this directory (if you accept the new default!) and its contents (make a backup, especially of the `admin.kubeconfig` file - just in case!)
- Remove variable `k8s_worker_download_dir` (no longer needed).
- Change default value of `k8s_worker_kubelet_conf_dir` to `{{ k8s_worker_conf_dir }}/kubelet`.
- Change default value of `k8s_worker_kubeproxy_conf_dir` to `{{ k8s_worker_conf_dir }}/kube-proxy`.
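
If you want to keep some of the previous behavior, the old defaults can be pinned via `group_vars`/`host_vars`. A minimal sketch, only using the variables and values named above (adjust to your setup):

```yaml
# Sketch: restore selected pre-24.0.0 defaults after the variable renames.
k8s_worker_conf_dir: "/var/lib/kubernetes"        # formerly k8s_conf_dir
k8s_interface: "tap0"                             # new default is "eth0"
k8s_worker_kubelet_conf_dir: "/var/lib/kubelet"
k8s_worker_kubeproxy_conf_dir: "/var/lib/kube-proxy"
```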

### FEATURE

- When downloading the Kubernetes binaries the task now verifies the SHA512 checksum (see the first sketch after this list).
- Introduce `k8s_worker_api_endpoint_host` and `k8s_worker_api_endpoint_port` variables. Previously `kubelet` and `kube-proxy` were configured to connect to the first host in the Ansible `k8s_controller` group and communicate with the `kube-apiserver` that was running there. This was hard-coded and couldn't be changed. If that host was down, the K8s worker nodes didn't receive any updates. Now one can install and use a load balancer like `haproxy` that distributes requests between all `kube-apiserver` instances and takes a `kube-apiserver` out of rotation if it is down (also see my Ansible [haproxy role](https://github.com/githubixx/ansible-role-haproxy) for that use case). The default is still to use the first host/`kube-apiserver` in the Ansible `k8s_controller` group, so behavior-wise basically nothing changed (see the second sketch after this list).
- Add task to generate `kubeconfig` for `kubelet` service (previously this was a separate [playbook](https://github.com/githubixx/ansible-kubernetes-playbooks/tree/v15.0.0_r1.27.5/kubeauthconfig)).
- Add task to generate `kubeconfig` for `kube-proxy` service (previously this was a separate [playbook](https://github.com/githubixx/ansible-kubernetes-playbooks/tree/v15.0.0_r1.27.5/kubeauthconfig)).
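
The checksum verification mentioned above follows the usual `get_url` pattern. A rough sketch (not the role's actual task; the URLs assume the layout published on `dl.k8s.io`):

```yaml
# Sketch: download a Kubernetes binary and have get_url verify the SHA512
# checksum that upstream publishes next to the binary.
- name: Download kubelet and verify SHA512 checksum
  ansible.builtin.get_url:
    url: "https://dl.k8s.io/v1.27.8/bin/linux/amd64/kubelet"
    dest: "/usr/local/bin/kubelet"
    mode: "0755"
    checksum: "sha512:https://dl.k8s.io/v1.27.8/bin/linux/amd64/kubelet.sha512"
```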
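
Pointing the worker services at a load balancer is then just a matter of overriding the two new variables. A hypothetical example (IP address and port are placeholders for your `haproxy` frontend, not defaults of this role):

```yaml
# Sketch: let kubelet and kube-proxy talk to a load balancer that
# distributes requests between all kube-apiserver instances.
k8s_worker_api_endpoint_host: "10.32.0.10"   # placeholder VIP
k8s_worker_api_endpoint_port: "16443"        # placeholder frontend port
```

Keep in mind that the host name or IP address used here must be included in the Kubernetes API server TLS certificate (see `k8s_apiserver_cert_hosts` in the [kubernetes_ca role](https://github.com/githubixx/ansible-role-kubernetes-ca)).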

### OTHER CHANGES

- Use `kubernetes.core.*` modules instead of the `kubectl` binary (a short sketch follows after this list)
- Fix some `ansible-lint` issues
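
As a rough idea of what that change looks like: a task that previously shelled out to `kubectl get nodes` can use `kubernetes.core.k8s_info` instead. A sketch, assuming a kubeconfig is provided e.g. via the `K8S_AUTH_KUBECONFIG` environment variable (as in the Molecule setup further down):

```yaml
# Sketch: query node objects via the Kubernetes API instead of kubectl.
- name: Fetch all nodes
  kubernetes.core.k8s_info:
    kind: Node
  register: node_info
```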

### MOLECULE

- Updated all files to reflect the changes introduced with this version
- Tasks for creating a `kubeconfig` for `kubelet` and `kube-proxy` are no longer needed as they're now part of the `kubernetes_worker` role
- Add `haproxy` to Ubuntu 22 hosts to test new `k8s_worker_api_endpoint_host` and `k8s_worker_api_endpoint_port` settings
- Add tasks to install [ansible-role-cni](https://github.com/githubixx/ansible-role-cni) and [ansible-role-runc](https://github.com/githubixx/ansible-role-runc)
- Use `kubernetes.core.k8s_info` module instead of calling `kubectl` binary

## 23.1.2+1.27.5

- rename `githubixx.harden-linux` to `githubixx.harden_linux`
218 changes: 154 additions & 64 deletions README.md

Large diffs are not rendered by default.

102 changes: 72 additions & 30 deletions defaults/main.yml
@@ -1,17 +1,39 @@
---
# The directory to store the K8s certificates and other configuration
k8s_conf_dir: "/var/lib/kubernetes"
# The base directory for Kubernetes configuration and certificate files for
# everything related to worker nodes. After the playbook is done this
# directory contains various sub-folders.
k8s_worker_conf_dir: "/etc/kubernetes/worker"

# The directory to store the K8s binaries
k8s_bin_dir: "/usr/local/bin"
# All certificate files (Private Key Infrastructure related) specified in
# "k8s_worker_certificates" (see "vars/main.yml") will be stored here.
# Owner and group of this new directory will be "root". File permissions
# will be "0640".
k8s_worker_pki_dir: "{{ k8s_worker_conf_dir }}/pki"

# The directory to store the Kubernetes binaries (see "k8s_worker_binaries"
# variable in "vars/main.yml"). Owner and group of this new directory
# will be "root" in both cases. Permissions for this directory will be "0755".
#
# NOTE: The default directory "/usr/local/bin" normally already exists on
# every Linux installation with the owner, group and permissions mentioned
# above. If your current settings are different, consider using a different
# directory. But make sure that the new directory is included in the value
# of your "$PATH" variable.
k8s_worker_bin_dir: "/usr/local/bin"

# K8s release
k8s_release: "1.27.5"
k8s_worker_release: "1.27.8"

# The interface on which the K8s services should listen on. As all cluster
# The interface on which the Kubernetes services should listen on. As all cluster
# communication should use a VPN interface the interface name is
# normally "wg0" (WireGuard),"peervpn0" (PeerVPN) or "tap0".
k8s_interface: "tap0"
#
# The network interface on which the Kubernetes worker services should
# listen on. That is:
#
# - kube-proxy
# - kubelet
#
k8s_interface: "eth0"

# The directory from where to copy the K8s certificates. By default this
# will expand to the user's LOCAL $HOME (the user that runs "ansible-playbook ..."
@@ -20,29 +42,49 @@ k8s_interface: "tap0"
# "/home/da_user/k8s/certs".
k8s_ca_conf_directory: "{{ '~/k8s/certs' | expanduser }}"

# Directory where kubeconfig for Kubernetes worker nodes and kube-proxy
# is stored among other configuration files. Same variable expansion
# rule applies as with "k8s_ca_conf_directory"
k8s_config_directory: "{{ '~/k8s/configs' | expanduser }}"

# K8s worker binaries to download
k8s_worker_binaries:
- kube-proxy
- kubelet
- kubectl
# The IP address or hostname of the Kubernetes API endpoint. This variable
# is used by "kube-proxy" and "kubelet" to connect to the "kube-apiserver"
# (Kubernetes API server).
#
# By default the first host in the Ansible group "k8s_controller" is
# specified here. NOTE: This setting is not fault-tolerant! That means
# if the first host in the Ansible group "k8s_controller" is down,
# the worker node and its workload continue working, but the worker
# node doesn't receive any updates from the Kubernetes API server.
#
# If you have a load balancer that distributes traffic between all
# Kubernetes API servers, it should be specified here (either its IP
# address or the DNS name). But you need to make sure that the IP
# address or the DNS name you want to use here is included in the
# Kubernetes API server TLS certificate (see the "k8s_apiserver_cert_hosts"
# variable of the https://github.com/githubixx/ansible-role-kubernetes-ca
# role). If it's not specified you'll get certificate errors in the
# logs of the services mentioned above.
k8s_worker_api_endpoint_host: "{% set controller_host = groups['k8s_controller'][0] %}{{ hostvars[controller_host]['ansible_' + hostvars[controller_host]['k8s_interface']].ipv4.address }}"

# Certificate/CA files for API server and kube-proxy
k8s_worker_certificates:
- ca-k8s-apiserver.pem
- ca-k8s-apiserver-key.pem
- cert-k8s-apiserver.pem
- cert-k8s-apiserver-key.pem
# Same as above, just for the port. It specifies on which port the
# Kubernetes API servers are listening. Again, if there is a load balancer
# in place that distributes requests to the Kubernetes API servers,
# put the port of the load balancer here.
k8s_worker_api_endpoint_port: "6443"

# Download directory for archive files
k8s_worker_download_dir: "/opt/tmp"
# OS packages needed on a Kubernetes worker node. You can add additional
# packages at any time. But please be aware that if you remove one or more
# packages from the default list, your worker node might not work as
# expected or might not work at all.
k8s_worker_os_packages:
- ebtables
- ethtool
- ipset
- conntrack
- iptables
- iptstate
- netstat-nat
- socat
- netbase

# Directory to store kubelet configuration
k8s_worker_kubelet_conf_dir: "/var/lib/kubelet"
k8s_worker_kubelet_conf_dir: "{{ k8s_worker_conf_dir }}/kubelet"

# kubelet settings
#
@@ -69,7 +111,7 @@ k8s_worker_kubelet_conf_yaml: |
webhook:
enabled: true
x509:
clientCAFile: "{{ k8s_conf_dir }}/ca-k8s-apiserver.pem"
clientCAFile: "{{ k8s_worker_pki_dir }}/ca-k8s-apiserver.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
@@ -80,14 +122,14 @@ k8s_worker_kubelet_conf_yaml: |
healthzPort: 10248
runtimeRequestTimeout: "15m"
serializeImagePulls: false
tlsCertFile: "{{ k8s_conf_dir }}/cert-{{ inventory_hostname }}.pem"
tlsPrivateKeyFile: "{{ k8s_conf_dir }}/cert-{{ inventory_hostname }}-key.pem"
tlsCertFile: "{{ k8s_worker_pki_dir }}/cert-{{ inventory_hostname }}.pem"
tlsPrivateKeyFile: "{{ k8s_worker_pki_dir }}/cert-{{ inventory_hostname }}-key.pem"
cgroupDriver: "systemd"
registerNode: true
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
# Directory to store kube-proxy configuration
k8s_worker_kubeproxy_conf_dir: "/var/lib/kube-proxy"
k8s_worker_kubeproxy_conf_dir: "{{ k8s_worker_conf_dir }}/kube-proxy"

# kube-proxy settings
k8s_worker_kubeproxy_settings:
12 changes: 8 additions & 4 deletions molecule/default/converge.yml
@@ -17,6 +17,8 @@
hosts: k8s_worker
become: true
gather_facts: true
environment:
K8S_AUTH_KUBECONFIG: "{{ k8s_admin_conf_dir }}/admin.kubeconfig"
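    # Note: kubernetes.core.* modules pick up K8S_AUTH_KUBECONFIG to locate
    # the kubeconfig, so no "kubectl" binary is required for the API calls.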
tasks:
- name: Include Cilium role
when:
@@ -25,12 +27,14 @@
ansible.builtin.include_role:
name: githubixx.cilium_kubernetes
vars:
cilium_action: "install"
cilium_action: "install" # noqa var-naming

- name: Setup tooling to make worker nodes usable
hosts: test-assets
become: true
gather_facts: true
environment:
K8S_AUTH_KUBECONFIG: "{{ k8s_admin_conf_dir }}/admin.kubeconfig"
tasks:
- name: Setup tooling
when:
@@ -40,15 +44,15 @@
- name: Waiting for Cilium to become ready
ansible.builtin.include_tasks:
file: tasks/cilium_status.yml

- name: Control plane nodes should only run Cilium pods
ansible.builtin.include_tasks:
file: tasks/taint_controller_nodes.yml

- name: Install CoreDNS
ansible.builtin.include_tasks:
file: tasks/coredns.yml

- name: Waiting for CoreDNS to become ready
ansible.builtin.include_tasks:
file: tasks/coredns_status.yml
