
25. Adding Kubernetes Support

Now that we have the application running nicely in containers, we can leverage the power of an orchestrator such as Kubernetes. If you have never used Kubernetes before, we recommend that you get a high-level overview of what it is before diving in (things will make a whole lot more sense).

Follow the link to read more about the benefits of an orchestrator, which for the purposes of this demo is Kubernetes.

With that base knowledge to stand on, let's integrate Kubernetes support into this project.

Initial Setup

At the root of our project, let's create a directory to store the files which Kubernetes requires:

mkdir eShopWCFK8s

Inside that directory, we'll create two yml files that describe the Kubernetes deployments for the SQL Server and WCF service containers.

touch eshop-sql-container-deployment.yml
touch eshop-wcf-container-deployment.yml

Adding Support - SQL Container

Let's walk through, chunk by chunk, what needs to go into the SQL yml file. We start off by specifying the API version and kind of this resource. What we define in this file is a Deployment, which will create a ReplicaSet to help the Deployment orchestrate SQL Pod creation, updates, and deletion.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sql-data

What follows after 'spec' is what defines the deployment. We give the Pods it creates a label, which the Service below will use to select them.

spec:
  template:
    metadata:
      labels:
        app: sql-data

The next chunk of the manifest will look very similar to the Dockerfile we created earlier. We'll tell Kubernetes what image to use for this container, give it a name, and set some environment variables in the container.

At the very bottom we indicate that this Pod should be scheduled onto a Kubernetes Node running Windows as its OS. The '---' separator that follows marks the end of the Deployment definition and the beginning of the Service definition.

    spec:
      containers:
      - name: sql-data
        image: microsoft/mssql-server-windows-developer
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: Pass@word
      nodeSelector:
        beta.kubernetes.io/os: windows
---

With our Deployment defined, we also want to declare a Service. Its type is LoadBalancer, which gives us an externally accessible IP address and distributes traffic among the Pods under its control.

By declaring both port and targetPort to be 1433, we route traffic so that when someone hits the external IP on our declared port, the request is forwarded to the port that the SQL Server in the container is listening on (SQL Server listens on port 1433 by default).

apiVersion: v1
kind: Service
metadata:
  labels:
    app: sql-data
  name: sql-data
spec:
  type: LoadBalancer
  #loadBalancerIP: 52.187.173.125
  ports:
  - port: 1433
    targetPort: 1433
  selector:
    app: sql-data
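
Once the manifest has been applied (we'll do that after assembling the full file) and the Service has been assigned a public address, this port mapping can be checked directly. A minimal sketch using sqlcmd, where <EXTERNAL-IP> is a placeholder for whatever address kubectl reports for the sql-data Service:

sqlcmd -S <EXTERNAL-IP>,1433 -U sa -P Pass@word -Q "SELECT @@VERSION"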

Putting it all together, we get a file that looks like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sql-data
spec:
  template:
    metadata:
      labels:
        app: sql-data
    spec:
      containers:
      - name: sql-data
        image: microsoft/mssql-server-windows-developer
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: Pass@word
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sql-data
  name: sql-data
spec:
  type: LoadBalancer
  #loadBalancerIP: 52.187.173.125
  ports:
  - port: 1433
    targetPort: 1433
  selector:
    app: sql-data
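
With the manifest assembled, it can be handed to the cluster. The following is a rough sketch, assuming kubectl is already configured against a cluster that contains at least one Windows node (otherwise the nodeSelector above will leave the Pod unschedulable):

# Confirm there is a node carrying the Windows OS label used by the nodeSelector
kubectl get nodes -L beta.kubernetes.io/os

# Create the Deployment and the Service defined in the manifest
kubectl apply -f eshop-sql-container-deployment.yml

# Check that the SQL Pod starts and that the Service receives an external IP
kubectl get pods -l app=sql-data
kubectl get service sql-data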

Adding Support - WCF Container

The yml file for the WCF container will look fairly similar to the SQL container yml file. Let's look at the contents, and we'll call out anything that was not present in the SQL file.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: eshop-modernized-wcf
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: eshop-modernized-wcf
    spec:
      containers:
      - name: eshop-modernized-wcf
        image: eshop/wcfservice
        ports:
        - containerPort: 80
        imagePullPolicy: Always
        env:
        - name: ConnectionString
          value: "Server=sql-data-for-wcf;Database=eShopDatabase;User Id=sa;Password=Testing11@@"
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: eshop-modernized-wcf
  labels:
    app: eshop-modernized-wcf
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: eshop-modernized-wcf

The unique thing to call out in this file is the "strategy" we specify for this deployment near the top of its spec. By defining our strategy to be a rolling update, we tell Kubernetes how we want it to replace old Pods with new ones: new Pods are brought online and old Pods are then taken offline.

The two settings that follow the strategy declaration, maxSurge and maxUnavailable, tell Kubernetes 1) the maximum number of Pods that can be created beyond the desired number of Pods, and 2) the maximum number of desired Pods that can be offline while the update happens.
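
As a quick sketch (assuming kubectl points at the same cluster and the eshop/wcfservice image is reachable from it), the manifest can be applied and the rolling update observed like this:

# Create the WCF Deployment and Service
kubectl apply -f eshop-wcf-container-deployment.yml

# Follow the rolling update until the new Pods are ready
kubectl rollout status deployment/eshop-modernized-wcf

# Find the external IP assigned to the WCF Service
kubectl get service eshop-modernized-wcf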

Next Steps

Continue reading to learn how we can deploy to Kubernetes in Azure.
