This quickstart guide contains the steps to install the Cluster Stack Operator (CSO) together with the Cluster Stack Provider OpenStack (CSPO), which provide ClusterClasses that can be used with the Kubernetes Cluster API to create Kubernetes clusters.
This section guides you through all the necessary steps to create a workload Kubernetes cluster on top of the OpenStack infrastructure. The guide describes a path that utilizes the clusterctl
CLI tool to manage the lifecycle of a CAPI management cluster and employs kind
to create a local, non-production management cluster.
Note that it is a common practice to create a temporary, local bootstrap cluster which is then used to provision a target management cluster on the selected infrastructure.
- Install Docker and kind
- Install kubectl
- Install Helm
- Install clusterctl
- Install go and envsubst (required to expand the variables specified in the CSO and CSPO manifests)
- Install jq
Create the kind cluster:
kind create cluster
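Before continuing, you can confirm the bootstrap cluster is reachable via the context kind creates (named kind-kind by default):

```shell
# Verify the kind bootstrap cluster is up and kubectl can talk to it
kubectl cluster-info --context kind-kind
```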
Transform the Kubernetes cluster into a management cluster by using clusterctl init
and bootstrap it with CAPI and Cluster API Provider OpenStack (CAPO) components:
export CLUSTER_TOPOLOGY=true
export EXP_CLUSTER_RESOURCE_SET=true
clusterctl init --infrastructure openstack
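Once clusterctl init finishes, the core CAPI controllers and the CAPO controller run in their own namespaces. A quick sanity check (the namespace names below follow the clusterctl defaults):

```shell
# Verify the CAPI core and OpenStack provider controllers are running
kubectl get pods -n capi-system
kubectl get pods -n capo-system
```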
The CSO and CSPO must be directed to the Cluster Stacks repository housing releases for the OpenStack provider. Modify and export the following environment variables if you wish to redirect CSO and CSPO to an alternative Git repository.
Be aware that GitHub enforces limitations on the number of API requests per unit of time. To overcome this,
it is recommended to configure a personal access token (PAT) for authenticated calls. This will significantly increase the rate limit for GitHub API requests.
A fine-grained PAT with Public Repositories (read-only) access is enough.
export GIT_PROVIDER_B64=Z2l0aHVi # github
export GIT_ORG_NAME_B64=U292ZXJlaWduQ2xvdWRTdGFjaw== # SovereignCloudStack
export GIT_REPOSITORY_NAME_B64=Y2x1c3Rlci1zdGFja3M= # cluster-stacks
export GIT_ACCESS_TOKEN_B64=$(echo -n '<my-personal-access-token>' | base64 -w0)
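The values above are plain base64. If you point at a fork, you can encode your own values the same way; for example (the org name here is a placeholder):

```shell
# Base64-encode custom repository coordinates (-w0 keeps the output on one line)
export GIT_PROVIDER_B64=$(echo -n 'github' | base64 -w0)
export GIT_ORG_NAME_B64=$(echo -n 'my-org' | base64 -w0)
# Decode to double-check the value round-trips
echo "$GIT_PROVIDER_B64" | base64 -d   # github
```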
Install the envsubst Go package. It is required to enable the expansion of variables specified in CSPO and CSO manifests.
GOBIN=/tmp go install github.com/drone/envsubst/v2/cmd/envsubst@latest
Get the latest CSO release version and apply CSO manifests to the management cluster.
# Get the latest CSO release version
CSO_VERSION=$(curl https://api.github.com/repos/SovereignCloudStack/cluster-stack-operator/releases/latest -s | jq .name -r)
# Apply CSO manifests
curl -sSL https://github.com/SovereignCloudStack/cluster-stack-operator/releases/download/${CSO_VERSION}/cso-infrastructure-components.yaml | /tmp/envsubst | kubectl apply -f -
Get the latest CSPO release version and apply CSPO manifests to the management cluster.
# Get the latest CSPO release version
CSPO_VERSION=$(curl https://api.github.com/repos/SovereignCloudStack/cluster-stack-provider-openstack/releases/latest -s | jq .name -r)
# Apply CSPO manifests
curl -sSL https://github.com/SovereignCloudStack/cluster-stack-provider-openstack/releases/download/${CSPO_VERSION}/cspo-infrastructure-components.yaml | /tmp/envsubst | kubectl apply -f -
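Both operators should come up shortly. As a sanity check, you can inspect their deployments (the namespaces cso-system and cspo-system are what the release manifests create):

```shell
# Verify the CSO and CSPO controller managers are deployed
kubectl -n cso-system get deployments
kubectl -n cspo-system get deployments
```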
The csp-helper chart creates per-tenant credentials as well as the tenant namespace in which all resources for this tenant will live. The cloud and secret name default to openstack.
Example clouds.yaml:
clouds:
  openstack:
    auth:
      auth_url: https://api.gx-scs.sovereignit.cloud:5000/v3
      application_credential_id: ""
      application_credential_secret: ""
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
helm upgrade -i csp-helper-my-tenant -n my-tenant --create-namespace https://github.com/SovereignCloudStack/openstack-csp-helper/releases/download/v0.3.0/v0.3.0.tgz -f path/to/clouds.yaml
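If the release succeeds, the chart should have created the tenant namespace and the OpenStack credentials secret (named openstack by default) that the manifests below reference. You can verify with:

```shell
# Confirm the tenant namespace and credentials secret exist
kubectl get namespace my-tenant
kubectl -n my-tenant get secret openstack
```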
cat <<EOF | kubectl apply -f -
apiVersion: clusterstack.x-k8s.io/v1alpha1
kind: ClusterStack
metadata:
  name: clusterstack
  namespace: my-tenant
spec:
  provider: openstack
  name: alpha
  kubernetesVersion: "1.29"
  channel: stable
  autoSubscribe: false
  providerRef:
    apiVersion: infrastructure.clusterstack.x-k8s.io/v1alpha1
    kind: OpenStackClusterStackReleaseTemplate
    name: cspotemplate
  versions:
    - v2
---
apiVersion: infrastructure.clusterstack.x-k8s.io/v1alpha1
kind: OpenStackClusterStackReleaseTemplate
metadata:
  name: cspotemplate
  namespace: my-tenant
spec:
  template:
    spec:
      identityRef:
        kind: Secret
        name: openstack
EOF
clusterstack.clusterstack.x-k8s.io/clusterstack created
openstackclusterstackreleasetemplate.infrastructure.clusterstack.x-k8s.io/cspotemplate created
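Provisioning the cluster stack release can take a few minutes while release assets are fetched and prepared. You can watch its status before proceeding:

```shell
# Watch the ClusterStack resource until a release becomes ready
kubectl -n my-tenant get clusterstack clusterstack -w
```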
Create a cluster.yaml file and apply it to the management cluster:
cat <<EOF | kubectl apply -f -
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cs-cluster
  namespace: my-tenant
  labels:
    managed-secret: cloud-config
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.96.0.0/12
  topology:
    variables:
      - name: controller_flavor
        value: "SCS-2V-4-50"
      - name: worker_flavor
        value: "SCS-2V-4-50"
      - name: external_id
        value: "ebfe5546-f09f-4f42-ab54-094e457d42ec" # gx-scs
    class: openstack-alpha-1-29-v2
    controlPlane:
      replicas: 1
    version: v1.29.3
    workers:
      machineDeployments:
        - class: openstack-alpha-1-29-v2
          failureDomain: nova
          name: openstack-alpha-1-29-v2
          replicas: 3
EOF
cluster.cluster.x-k8s.io/cs-cluster created
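While the control plane and workers come up, you can also follow the individual CAPI machines in the tenant namespace:

```shell
# Follow machine provisioning; each machine should eventually reach Running
kubectl -n my-tenant get machines -w
```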
Use the clusterctl CLI to inspect the health of the cluster:
clusterctl -n my-tenant describe cluster cs-cluster
Once the cluster is provisioned and in good health, you can retrieve its kubeconfig and establish communication with the newly created workload cluster:
# Get the workload cluster kubeconfig
clusterctl -n my-tenant get kubeconfig cs-cluster > kubeconfig.yaml
# Communicate with the workload cluster
kubectl --kubeconfig kubeconfig.yaml get nodes
$ kubectl --kubeconfig kubeconfig.yaml get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-8mzrx 1/1 Running 0 7m58s
kube-system cilium-jdxqm 1/1 Running 0 6m43s
kube-system cilium-operator-6bb4c7d6b6-c77tn 1/1 Running 0 7m57s
kube-system cilium-operator-6bb4c7d6b6-l2df8 1/1 Running 0 7m58s
kube-system cilium-p9tkv 1/1 Running 0 6m44s
kube-system cilium-thbc8 1/1 Running 0 6m45s
kube-system coredns-5dd5756b68-k68j4 1/1 Running 0 8m3s
kube-system coredns-5dd5756b68-vjg9r 1/1 Running 0 8m3s
kube-system etcd-cs-cluster-pwblg-xkptx 1/1 Running 0 8m3s
kube-system kube-apiserver-cs-cluster-pwblg-xkptx 1/1 Running 0 8m3s
kube-system kube-controller-manager-cs-cluster-pwblg-xkptx 1/1 Running 0 8m3s
kube-system kube-proxy-54f8w 1/1 Running 0 6m44s
kube-system kube-proxy-8z8kb 1/1 Running 0 6m43s
kube-system kube-proxy-jht46 1/1 Running 0 8m3s
kube-system kube-proxy-mt69p 1/1 Running 0 6m45s
kube-system kube-scheduler-cs-cluster-pwblg-xkptx 1/1 Running 0 8m3s
kube-system metrics-server-6578bd6756-vztzf 1/1 Running 0 7m57s
kube-system openstack-cinder-csi-controllerplugin-776696786b-ksf77 6/6 Running 0 7m57s
kube-system openstack-cinder-csi-nodeplugin-96dlg 3/3 Running 0 6m43s
kube-system openstack-cinder-csi-nodeplugin-crhc4 3/3 Running 0 6m44s
kube-system openstack-cinder-csi-nodeplugin-d7rzz 3/3 Running 0 7m58s
kube-system openstack-cinder-csi-nodeplugin-nkgq6 3/3 Running 0 6m44s
kube-system openstack-cloud-controller-manager-hp2n2 1/1 Running 0 7m9s