My Homelab Repository ❄️

... automated via Flux, Renovate and GitHub Actions 🤖

Discord   Talos   Kubernetes   Renovate

Home-Internet   Status-Page   Plex

Age-Days   Uptime-Days   Node-Count   Pod-Count   CPU-Usage   Memory-Usage   Power-Usage


Overview

This monorepository is for my home Kubernetes clusters. I try to adhere to Infrastructure as Code (IaC) and GitOps practices using tools like Ansible, Terraform, Kubernetes, Flux, Renovate, and GitHub Actions.

The purpose here is to learn Kubernetes while practicing GitOps.


⛵ Kubernetes

There is a template over at onedr0p/cluster-template if you want to try and follow along with some of the practices I use here.

Installation

My clusters are a mix of k3s, provisioned on top of bare-metal Debian using the Ansible Galaxy role ansible-role-k3s, and Talos Linux, an immutable Kubernetes OS. This is a semi-hyper-converged setup: workloads and block storage share the same available resources on my nodes, while a separate NAS server with ZFS provides NFS/SMB shares, bulk file storage and backups.
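
For the k3s side, provisioning amounts to running that role against an inventory of the Debian hosts. A minimal sketch, assuming the role is installed from Ansible Galaxy as xanmanning.k3s; the inventory group name and version pin are placeholders, not this repo's actual Ansible setup:

```yaml
# k3s.yaml - hypothetical playbook sketch for ansible-role-k3s, not this repo's actual playbook
- hosts: k3s_cluster                    # placeholder inventory group
  become: true
  vars:
    k3s_release_version: v1.29.4+k3s1   # example version pin only
  roles:
    - role: xanmanning.k3s              # ansible-role-k3s as published on Ansible Galaxy
```

Which hosts act as control-plane nodes versus workers is typically set with the role's per-host variables in the inventory rather than in the playbook itself.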

Core Components

  • actions-runner-controller: self-hosted GitHub runners
  • cilium: internal Kubernetes networking plugin
  • cert-manager: creates SSL certificates for services in my cluster
  • external-dns: automatically syncs DNS records from my cluster ingresses to a DNS provider
  • external-secrets: manages Kubernetes secrets using Bitwarden
  • ingress-nginx: ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer
  • longhorn: cloud-native distributed block storage for Kubernetes
  • rook-ceph: cloud-native distributed block storage for Kubernetes
  • sops: manages secrets for Kubernetes, Ansible, and Terraform which are committed to Git (see the sketch after this list)
  • spegel: stateless cluster-local OCI registry mirror
  • tf-controller: additional Flux component used to run Terraform from within a Kubernetes cluster
  • volsync: backup and recovery of persistent volume claims
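
As a concrete example of the sops entry above, here is a minimal .sops.yaml sketch assuming age encryption; the path regex and recipient key are placeholders rather than this repo's actual rules:

```yaml
# .sops.yaml - hypothetical creation rule: encrypt only Secret data fields with an age key
creation_rules:
  - path_regex: kubernetes/.*\.sops\.ya?ml
    encrypted_regex: "^(data|stringData)$"
    age: age1examplepublickeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx   # placeholder recipient
```

Matching files can then be encrypted in place with sops --encrypt --in-place before being committed to Git.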

GitOps

Flux watches the clusters in my kubernetes folder (see Directories below) and makes the changes to my clusters based on the state of my Git repository.

The way Flux works for me here is that it recursively searches the kubernetes/${cluster}/apps folder until it finds the top-most kustomization.yaml in each directory, then applies all the resources listed in it. That kustomization.yaml will generally only have a namespace resource and one or more Flux kustomizations. Those Flux kustomizations will generally have a HelmRelease or other resources related to the application underneath them, which will then be applied.
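
For illustration, that layout looks roughly like the sketch below; the app name (echo-server), paths, and GitRepository name are hypothetical and not copied from this repo:

```yaml
# kubernetes/main/apps/default/kustomization.yaml - the top-level file Flux discovers
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ./namespace.yaml
  - ./echo-server/ks.yaml
---
# kubernetes/main/apps/default/echo-server/ks.yaml - Flux Kustomization for one app
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: echo-server
  namespace: flux-system
spec:
  interval: 30m
  path: ./kubernetes/main/apps/default/echo-server/app
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-kubernetes   # placeholder source name
```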

Renovate watches my entire repository looking for dependency updates; when one is found, a PR is automatically created. When those PRs are merged, Flux applies the changes to my cluster.

Directories

This Git repository contains the following directories under kubernetes.

πŸ“ kubernetes
β”œβ”€β”€ πŸ“ main               # main cluster
β”‚   β”œβ”€β”€ πŸ“ apps           # applications
β”‚   β”œβ”€β”€ πŸ“ bootstrap      # bootstrap procedures
β”‚   β”œβ”€β”€ πŸ“ flux           # core flux configuration
β”‚   └── πŸ“ templates      # re-useable components
└── πŸ“ pi                 # pi cluster
    β”œβ”€β”€ πŸ“ apps           # applications
    β”œβ”€β”€ πŸ“ bootstrap      # bootstrap procedures
    β”œβ”€β”€ πŸ“ flux           # core flux configuration
    └── πŸ“ templates      # re-useable components

Flux Workflow

This is a high-level look at how Flux deploys my applications with dependencies. Below there are three apps: postgres, authentik and weave-gitops. postgres is the first app that needs to be running and healthy before authentik and weave-gitops. Once postgres is healthy, authentik will be deployed, and after that is healthy, weave-gitops will be deployed.

graph TD;
  id1>Kustomization: cluster] -->|Creates| id2>Kustomization: cluster-apps];
  id2>Kustomization: cluster-apps] -->|Creates| id3>Kustomization: postgres];
  id2>Kustomization: cluster-apps] -->|Creates| id6>Kustomization: authentik]
  id2>Kustomization: cluster-apps] -->|Creates| id8>Kustomization: weave-gitops]
  id2>Kustomization: cluster-apps] -->|Creates| id5>Kustomization: postgres-cluster]
  id3>Kustomization: postgres] -->|Creates| id4[HelmRelease: postgres];
  id5>Kustomization: postgres-cluster] -->|Depends on| id3>Kustomization: postgres];
  id5>Kustomization: postgres-cluster] -->|Creates| id10[Postgres Cluster];
  id6>Kustomization: authentik] -->|Creates| id7(HelmRelease: authentik);
  id6>Kustomization: authentik] -->|Depends on| id5>Kustomization: postgres-cluster];
  id8>Kustomization: weave-gitops] -->|Creates| id9(HelmRelease: weave-gitops);
  id8>Kustomization: weave-gitops] -->|Depends on| id5>Kustomization: postgres-cluster];
  id9(HelmRelease: weave-gitops) -->|Depends on| id7(HelmRelease: authentik);
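
Expressed in Flux terms, the "Depends on" edges above are dependsOn entries on the Kustomizations. A hedged sketch of what the authentik Kustomization could look like; the path and source name are placeholders:

```yaml
# Hypothetical Flux Kustomization illustrating the dependency ordering shown above
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: authentik
  namespace: flux-system
spec:
  dependsOn:
    - name: postgres-cluster                            # wait for the postgres-cluster Kustomization first
  interval: 30m
  path: ./kubernetes/main/apps/security/authentik/app   # placeholder path
  prune: true
  wait: true                                            # Ready only once the resources underneath are healthy
  sourceRef:
    kind: GitRepository
    name: home-kubernetes                               # placeholder source name
```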

Networking

Click to see a high-level network diagram.

☁️ Cloud Dependencies

While most of my infrastructure and workloads are self-hosted, I do rely upon the cloud for certain key parts of my setup. This saves me from having to worry about two things: (1) dealing with chicken/egg scenarios, and (2) services I critically need whether my cluster is online or not.

The alternative solution to these two problems would be to host a Kubernetes cluster in the cloud and deploy applications like HCVault, Vaultwarden, ntfy, and Gatus. However, maintaining another cluster and monitoring another group of workloads is a lot more time and effort than I am willing to put in.

| Service | Use | Cost |
|---|---|---|
| Bitwarden | Secrets with External Secrets | ~$10/yr |
| Cloudflare | Domain and S3 | ~$30/yr |
| GitHub | Hosting this repository and continuous integration/deployments | Free |
| Healthchecks.io | Monitoring internet connectivity and external facing applications | Free |
| | Total | ~$5/mo |

🔧 Hardware

Main Kubernetes Cluster

| Name | Device | CPU | OS Disk | Data Disk | RAM | OS | Purpose |
|---|---|---|---|---|---|---|---|
| Ayaka | Dell 7080mff | i5-10500T | 480GB SSD | 1TB NVMe | 64GB | Talos | k8s control-plane |
| Eula | Dell 7080mff | i7-10700T | 480GB SSD | 1TB NVMe | 64GB | Talos | k8s control-plane |
| Ganyu | Dell 3080mff | i5-10500T | 240GB SSD | 1TB NVMe | 64GB | Talos | k8s control-plane |
| HuTao | Dell 3080mff | i5-10500T | 480GB SSD | 1TB NVMe | 64GB | Talos | k8s worker |
| Navia | Dell 3080mff | i5-10500T | 256GB SSD | 1TB NVMe | 64GB | Talos | k8s worker |
| Yelan | Dell 3080mff | i5-10500T | 240GB SSD | 1TB NVMe | 64GB | Talos | k8s worker |

Total CPU: 76 threads Total RAM: 384GB

Pi Kubernetes Cluster

| Name | Device | CPU | OS Disk | RAM | OS | Purpose |
|---|---|---|---|---|---|---|
| Acheron | Raspberry Pi5 | Cortex A76 | 240GB SSD | 8GB | Debian | k8s control-plane |
| Himeko | Raspberry Pi5 | Cortex A76 | 240GB SSD | 8GB | Debian | k8s control-plane |
| Jingliu | Raspberry Pi4 | Cortex A72 | 256GB SSD | 8GB | Debian | k8s control-plane |
| Kafka | Raspberry Pi4 | Cortex A72 | 240GB SSD | 8GB | Debian | k8s worker |

Total CPU: 16 threads Total RAM: 32GB

Supporting Hardware

| Name | Device | CPU | OS Disk | Data Disk | RAM | OS | Purpose |
|---|---|---|---|---|---|---|---|
| NAS | HP z820 | E5-2680v2 | 32GB USB | 500GB NVMe | 128GB | Unraid | NAS/NFS/Backup |
| DAS | Lenovo SA120 | - | - | 56TB | - | - | DAS w/ Parity |
| Nahida | Raspberry Pi4 | Cortex A72 | 120GB SSD | - | 4GB | Fedora IoT | DNS/NUT/BWS-Cache |
| Mika | Beelink Mini-S | Celeron N5095 | 1TB M.2 SATA | 500GB SSD | 16GB | Debian | "Crash cart" |

Networking/UPS Hardware

| Device | Purpose |
|---|---|
| Unifi UDM-SE | Network - Router |
| USW-Pro-24-POE | Network - Switch |
| Back-UPS 600 | Network - UPS |
| Unifi USW-Enterprise-24-PoE | Server - Switch |
| Tripp Lite 1500 | Server - UPS |

⭐ Stargazers

Star History Chart


🤝 Thanks

Big shout out to the original cluster-template, and to the Home Operations Discord community.

Be sure to check out kubesearch.dev for ideas on how to deploy applications, or for inspiration on what you might deploy.


📜 Changelog

See my awful commit history


πŸ” License

See LICENSE