You won’t be able to connect just yet, but that shouldn’t stop you from getting your local machine or remote management server set up to manage your cluster. In either case, the management machine will need:
- kubectl, the official Kubernetes command-line tool, which you’ll use to connect to the cluster
- The cluster configuration file, which contains authentication certificates
The Kubernetes project provides detailed directions for installation on a variety of platforms. Use kubectl version to make sure that your install is working and within one minor version of your cluster.
If you are on macOS and use the Homebrew package manager, you can install kubectl with Homebrew:
- Run the installation command:
  brew install kubernetes-cli
- Test to ensure the version you installed is sufficiently up-to-date:
  kubectl version
Once you’ve installed kubectl, download the cluster’s config file with the button below. You can place the config file anywhere on the machine where you run kubectl and invoke it with the --kubeconfig option. By convention, Kubernetes config files are stored in a hidden folder in your home directory named .kube.
To test that the file authenticates successfully, you can use the following command from within the .kube directory:
kubectl --kubeconfig="cluster1-kubeconfig-dupe.yaml" get nodes
If you are using kubectl from elsewhere on the filesystem, supply the full path to the config file.
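As an alternative to passing --kubeconfig on every invocation, kubectl also honors the KUBECONFIG environment variable. A minimal sketch, assuming you have moved the config file into ~/.kube (the path is an assumption; adjust it to wherever you saved the file):

```shell
# Point kubectl at the downloaded config for the current shell session.
# With this set, you can omit the --kubeconfig flag entirely.
export KUBECONFIG="$HOME/.kube/cluster1-kubeconfig-dupe.yaml"
```

With the variable set, a plain kubectl get nodes uses that configuration without any extra flags.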
When the command is successful, it should return information similar to the following, although the details will vary depending on the specific cluster configuration:
NAME STATUS ROLES AGE VERSION
worker-9511 Ready <none> 3d v1.10.7
worker-9512 Ready <none> 3d v1.10.7
worker-9513 Ready <none> 3d v1.10.7
Once kubectl and the cluster configuration file are in place, you can create, manage, and deploy workloads to your cluster. From here, you can add DigitalOcean Load Balancers and block storage volumes to your cluster.
In Kubernetes, there are various types of workloads you can deploy. Below are four example manifests that can be deployed to your cluster. Copy an example manifest to a file on your workstation and use kubectl to apply it:
kubectl create -f ./my-manifest.yaml
Deployments describe a set of identical Pods without unique identities. A Deployment will run multiple replicas of your application and will automatically replace instances that fail or become unresponsive.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment-example
  template:
    metadata:
      labels:
        app: nginx-deployment-example
    spec:
      containers:
        - name: nginx
          image: library/nginx
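Replacement of failed instances depends on Kubernetes detecting the failure. One way to make "unresponsive" explicit is a liveness probe. The following sketch extends the manifest above with an HTTP probe; the replica count, probe path, and timings are illustrative assumptions, not part of the original example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 3                      # raised for illustration
  selector:
    matchLabels:
      app: nginx-deployment-example
  template:
    metadata:
      labels:
        app: nginx-deployment-example
    spec:
      containers:
        - name: nginx
          image: library/nginx
          livenessProbe:           # restart the container if this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

If the HTTP check on port 80 fails repeatedly, the kubelet restarts that container, and the Deployment keeps the replica count at three.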
A Cron Job creates Jobs on a time-based schedule. One CronJob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, written in Cron format.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: '*/5 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cronjob-example
              image: busybox
              args:
                - /bin/sh
                - '-c'
                - echo This is an example cronjob running every five minutes
          restartPolicy: OnFailure
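Beyond schedule, the CronJob spec exposes fields for controlling overlapping runs and job history. A sketch of the same CronJob with a few commonly tuned fields; the chosen values are illustrative assumptions:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3    # keep only the last three completed Jobs
  failedJobsHistoryLimit: 1        # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cronjob-example
              image: busybox
              args:
                - /bin/sh
                - '-c'
                - echo This is an example cronjob running every five minutes
          restartPolicy: OnFailure
```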
A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-example
spec:
  containers:
    - name: nginx-pod-example
      image: library/nginx
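Because the Pod, not the container, is the unit of deployment, a single Pod can hold more than one container; containers in the same Pod share networking and can share volumes. A minimal sketch of a two-container Pod, where the names, images, and shared emptyDir volume are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod-example
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                 # scratch volume shared by both containers
  containers:
    - name: writer
      image: busybox
      command: ['/bin/sh', '-c', 'echo hello > /data/index.html && sleep 3600']
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: nginx
      image: library/nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
```

Here the writer container produces a file that the nginx container serves, without either container knowing about the other's filesystem layout.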
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. You can specify how many replicas of the pod should be running by editing the 'replicas' key in the example below:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-replicaset-example
  template:
    metadata:
      labels:
        app: nginx-replicaset-example
    spec:
      containers:
        - name: nginx-replicaset-example
          image: library/nginx
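To change the replica count declaratively, edit the 'replicas' field and re-apply the manifest; the ReplicaSet controller reconciles the number of running Pods to match. Only the changed portion is shown below (kubectl scale offers an imperative alternative):

```yaml
spec:
  replicas: 3   # was 1; re-apply with: kubectl apply -f ./my-manifest.yaml
```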
When you need to write and access persistent data in a Kubernetes cluster, you can create and access DigitalOcean block storage volumes by creating a PersistentVolumeClaim as part of your deployment.
The claim can allow cluster workers to read and write database records, user-generated website content, log files, and other data that should persist after a process has completed.
The example configuration defines two types of objects:
- The PersistentVolumeClaim called csi-pvc, which is responsible for locating the block storage volume by name if it already exists and creating the volume if it does not.
- The Pod named my-csi-app, which will create containers, then add a mountpoint to the first object and mount the volume there.
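As orientation before the full definition, a claim of this general shape would serve. This is a minimal sketch: the requested size is an illustrative assumption, and do-block-storage is DigitalOcean's StorageClass for block storage volumes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce              # a block storage volume attaches to one node
  resources:
    requests:
      storage: 5Gi               # illustrative size
  storageClassName: do-block-storage
```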
Continue on to define the Persistent Volume Claim.
The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster’s resource configuration file.
The example configuration will define a load balancer and create it if one with the same name does not already exist.
You can add an external load balancer to a cluster by creating a new configuration file or adding the following lines to your existing service config file. Note that both the type and ports values are required for type: LoadBalancer:
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
      name: http
Continue on to see what this might look like in the context of a service file.