GKE K8s tutorial

Prerequisite steps:

Installing tools

You will need a few tools to interact with GKE, which you can install and set up with the following steps:

  1. brew cask install google-cloud-sdk
  2. gcloud components install kubectl
  3. gcloud auth login
  4. Make sure you have docker installed and running - a good way to check is to run docker ps, which should return without error and print at least the column headers (see the quick sanity-check sketch below)
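
If you want to sanity-check the whole setup, here is a quick sketch using standard gcloud/kubectl/docker subcommands:

# confirm the SDK and kubectl are on your PATH
gcloud version
kubectl version --client
# confirm you are logged in to Google Cloud
gcloud auth list
# confirm the docker daemon is reachable
docker ps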

Connecting to your GKE cluster

You will be using the kubectl cli to, well, ConTroL your KUBErnetes. Much like the cf cli, it keeps track of config and auth in a file in your home directory, ~/.kube/config. To connect to your cluster (the equivalent of cf api):

  1. gcloud container clusters get-credentials <cluster-name> --zone us-central1-a --project cf-capi-arya (the cluster-name will be capi-<YOUR_NAME>)
  2. kubectl config current-context will confirm that you are correctly targeted
  3. try running kubectl api-versions, which is roughly the equivalent of cf curl /v2/info (a couple of other quick checks are sketched below)
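
A couple of other standard kubectl commands that are handy for confirming the connection (nothing here is specific to this exercise):

# list every context kubectl knows about and show which one is active
kubectl config get-contexts
# basic reachability check against the targeted cluster
kubectl cluster-info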

Do some learning

Check out this video: https://www.youtube.com/watch?v=4ht22ReBjno

Or if you prefer the written word, https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes

Deploying your first app

Follow along with the nginx Deployment example in the Kubernetes docs (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/, also linked in the Deployments section below) - notice some familiar terms that mean new things?

It may be nice to run watch 'kubectl get pods -l app=nginx' in another window as you are applying changes, so you can see them happen in "real" time. (side note - recognize the syntax in the -l param? You guessed it, it's a label selector 🎉)
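
For reference, here is a minimal Deployment manifest along the lines of the one in that tutorial (the nginx-deployment name, app=nginx label, and nginx:1.14.2 image follow the upstream example - the exact image tag doesn't matter). You can apply it straight from stdin:

# apply a minimal nginx Deployment, closely following the upstream example
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF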

Scaling

Try scaling the # of replicas: kubectl scale --replicas=10 deployment/nginx-deployment (what does the --current-replicas flag do?)
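
One way to watch the scale-up as it happens (standard kubectl flags):

# scale the deployment, then stream pod changes as the new replicas come up
kubectl scale --replicas=10 deployment/nginx-deployment
kubectl get pods -l app=nginx --watch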

Mapping a...route?

You may have noticed that by default, your deployment of nginx is not externally routable. To create a load balancer and expose that nginx to the internets, you can run:

  1. kubectl expose deployment nginx-deployment --type=LoadBalancer --name=example-nginx-service
  2. watch 'kubectl describe services example-nginx-service' until you see an EnsuredLoadBalancer event - you'll also notice that a new field, LoadBalancer Ingress, appears.
  3. Go to http://<LoadBalancer Ingress>:<Port> (the service port shown in the describe output) to see your app, live on the internets

For more about services, see https://kubernetes.io/docs/concepts/services-networking/service/

Bonus: create your service declaratively instead of using a kubectl expose command
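
For the bonus, here is a sketch of what the declarative version might look like (the example-nginx-service name and app=nginx selector mirror the expose exercise above; the ports assume nginx is listening on 80):

# declarative equivalent of the kubectl expose command above
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF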

Deployments

Before you delete the deployment, check out how kubernetes thinks about revisions:

  1. kubectl rollout history deployment.v1.apps/nginx-deployment
  2. kubectl rollout history deployment.v1.apps/nginx-deployment --revision=1
  3. kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 (revision 2 only exists if the deployment has been updated at least once - see the note after this list)
  4. You can roll back to revision 1 using the following command: kubectl rollout undo deployment.v1.apps/nginx-deployment && kubectl rollout status deployment.v1.apps/nginx-deployment, which will wait for the rollback to complete before exiting
  5. now run kubectl rollout history deployment.v1.apps/nginx-deployment - where did revision 1 go?
  6. to roll back to a specific revision, you can use kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
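
If your rollout history only shows a single revision, one way to create a second one is to change the image (the 1.16.1 tag here is just an illustrative newer nginx version; the container name nginx matches the manifest sketch above):

# update the container image, which records a new Deployment revision
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
# wait for the rollout to finish
kubectl rollout status deployment/nginx-deployment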

For more on deployments in kubernetes, check out https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Using cloud-native buildpacks and the pack cli to replicate the staging experience

Installing pack

  1. brew tap buildpack/tap && brew install pack to install the cli
  2. gcloud auth configure-docker to tell docker to get creds from your google cloud account to allow you to upload container images to GCR, google's container registry

Building an image

For now, the only built-in buildpacks are for java and node. Both of these languages exist as sample apps in CATs, so go wild (I only tried it with ~/go/src/github.com/cloudfoundry/cf-acceptance-tests/assets/node).

You should be able to follow along with https://buildpacks.io/docs/using-pack/building-app/ to figure out how to build your app. Make sure you remember to map ports! To run it locally, you can do docker run -p <external port you will use to access from outside the container>:<internal port your app is listening on> <image name>
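
A rough sketch of what that could look like for the CATs node sample (the node-sample image name and the 8080 port are assumptions - substitute whatever port your app actually listens on):

# build an OCI image from the app source with the pack cli
cd ~/go/src/github.com/cloudfoundry/cf-acceptance-tests/assets/node
pack build node-sample
# run it locally, mapping local port 8080 to the (assumed) port the app listens on inside the container
docker run -p 8080:8080 node-sample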

Pushing the image to GCR

See https://cloud.google.com/container-registry/docs/pushing-and-pulling#pushing_an_image_to_a_registry - you'll probably end up with something like us.gcr.io/cf-capi-arya/node-sample:<YOUR_NAME>

Note that unlike the k8s clusters, we are sharing a container registry for this exercise, which is why I recommend a tag with your name!
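
Roughly, that boils down to re-tagging the local image with the registry path and pushing it (node-sample is the local image name from the pack sketch above; keep the <YOUR_NAME> placeholder convention):

# re-tag the locally built image with the shared GCR path, then push it
docker tag node-sample us.gcr.io/cf-capi-arya/node-sample:<YOUR_NAME>
docker push us.gcr.io/cf-capi-arya/node-sample:<YOUR_NAME>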

Running that image in GKE

Remember the first exercise, where we used the nginx image? Modify that deployment yaml file to instead refer to the image you just built with pack and pushed to GCR, e.g.:

...
containers:
- name: node-sample
  image: us.gcr.io/cf-capi-arya/node-sample:chris
...
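
Then apply the updated manifest and check that the new pods come up, e.g. (the file name here is just whatever you saved your deployment yaml as):

# apply the edited deployment manifest and confirm the pods are running
kubectl apply -f node-sample-deployment.yaml
kubectl get pods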

"Pushing" changes

Figure out what steps you'd have to follow to get a new version of your code deployed.

How about dora?

Using all you've learned so far, figure out how to get dora up and running by building a docker image containing it, pushing it to GCR, running it on GKE, and exposing it to the internet! Note that the pack cli doesn't support ruby buildpacks yet, so you'll have to find an appropriate docker image and add dora to it.

Extra credit

  1. Map an actual DNS route to your app (can you map DNS to the cluster and use appName.clusterDNS like in cf without too much work?)
  2. Push your dora image to a bosh lite - how is the docker experience currently on cf?
  3. Bind a service to your app (see https://cloud.google.com/kubernetes-engine/docs/how-to/add-on/service-catalog/install-service-catalog)
  4. Try another tutorial that lets you do stuff that's easy for K8s but hard for cf, like stateful apps (https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
  5. Play with the autoscaler (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
  6. How could we have used the same cluster without stepping on each other's toes?
  7. Do something with istio
  8. Anything else cool you can think of

Reference

kubectl docs

https://kubernetes.io/docs/reference/kubectl/overview/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Rough cf <-> kubectl mapping

cf                   rough kubectl equivalent
cf push -f           kubectl apply -f
cf apps              kubectl get deployments
cf app a             kubectl get pods -l app=a
cf delete a          kubectl delete deployment a-deployment
cf scale a -i 3      kubectl scale --replicas=3 deployment/a-deployment
cf map-route         kubectl expose deployment