Commit c5661c8: port to kubernetes 1.9

irvifa committed Mar 26, 2018
1 parent d9332d4
Showing 5 changed files with 47 additions and 75 deletions.
79 changes: 21 additions & 58 deletions README.md
@@ -37,9 +37,9 @@ Before deploying the `locust-master` and `locust-worker` controllers, update eac
    - name: TARGET_HOST
      value: http://PROJECT-ID.appspot.com

-### Update Controller Docker Image (Optional)
+### Build Locust Docker Image

-The `locust-master` and `locust-worker` controllers are set to use the pre-built `locust-tasks` Docker image, which has been uploaded to the [Google Container Registry](http://gcr.io) and is available at `gcr.io/cloud-solutions-images/locust-tasks`. If you are interested in making changes and publishing a new Docker image, refer to the following steps.
+To build and publish the Locust controller Docker image, refer to the following steps.

First, [install Docker](https://docs.docker.com/installation/#installation) on your platform. Once Docker is installed and you've made changes to the `Dockerfile`, you can build, tag, and upload the image using the following steps:
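
A typical build-and-push sequence, assuming the image is tagged into your own project (`[PROJECT-ID]` is a placeholder, and the build context is the repository's `docker-image/` directory), looks like:

    $ docker build -t gcr.io/[PROJECT-ID]/locust-tasks:latest docker-image/
    $ gcloud docker -- push gcr.io/[PROJECT-ID]/locust-tasks:latest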

@@ -49,7 +49,7 @@ First, [install Docker](https://docs.docker.com/installation/#installation) on y

**Note:** you are not required to use the Google Container Registry. If you'd like to publish your images to the [Docker Hub](https://hub.docker.com) please refer to the steps in [Working with Docker Hub](https://docs.docker.com/userguide/dockerrepos/).

-Once the Docker image has been rebuilt and uploaded to the registry you will need to edit the controllers with your new image location. Specifically, the `spec.template.spec.containers.image` field in each controller controls which Docker image to use.
+Once the Docker image has been built and uploaded to the registry, you will need to edit the deployments with your new image location. Specifically, the `spec.template.spec.containers.image` field in each deployment controls which Docker image to use.

If you uploaded your Docker image to the Google Container Registry:
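
For example, both deployment files can be pointed at a new image in one step; a sketch using the paths from `deploy.sh`, with `[PROJECT-ID]` as a placeholder:

    $ sed -i "s|gcr.io/cloud-solutions-images/locust-tasks:latest|gcr.io/[PROJECT-ID]/locust-tasks:latest|g" kubernetes-config/locust-master-deployment.yaml kubernetes-config/locust-worker-deployment.yaml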

@@ -63,94 +63,57 @@ If you uploaded your Docker image to the Docker Hub:

### Deploy Kubernetes Cluster

-First create the [Google Container Engine](http://cloud.google.com/container-engine) cluster using the `gcloud` command as shown below.
+First, create the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) cluster using the `gcloud` command as shown below.

-**Note:** This command defaults to creating a three node Kubernetes cluster (not counting the master) using the `n1-standard-1` machine type. Refer to the [`gcloud alpha container clusters create`](https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/create) documentation information on specifying a different cluster configuration.
+**Note:** This command defaults to creating a three-node Kubernetes cluster (not counting the master) using the `n1-standard-1` machine type. Refer to the [`gcloud container clusters create`](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) documentation for information on specifying a different cluster configuration.

-    $ gcloud alpha container clusters create CLUSTER-NAME
+    $ gcloud container clusters create [CLUSTER-NAME]
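
The cluster size and machine type can also be set at creation time; for example (an illustrative sketch, the values simply mirror the defaults noted above):

    $ gcloud container clusters create [CLUSTER-NAME] --num-nodes 3 --machine-type n1-standard-1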

After a few minutes, you'll have a working Kubernetes cluster with three nodes (not counting the Kubernetes master). Next, configure your system to use the `kubectl` command:

-    $ kubectl config use-context gke_PROJECT-ID_ZONE_CLUSTER-NAME
+    $ gcloud container clusters get-credentials [CLUSTER-NAME]

**Note:** the output from the previous `gcloud` cluster create command will contain the specific `kubectl config` command to execute for your platform/project.

### Deploy locust-master

-Now that `kubectl` is setup, deploy the `locust-master-controller`:
+Now that `kubectl` is set up, deploy the Kubernetes configuration:

-    $ kubectl create -f locust-master-controller.yaml
+    $ ./deploy.sh [PROJECT_ID]

-To confirm that the Replication Controller and Pod are created, run the following:
+To confirm that the Deployment and Pod are created, run the following:

-    $ kubectl get rc
+    $ kubectl get deployments
    $ kubectl get pods -l name=locust,role=master

-Next, deploy the `locust-master-service`:
-
-    $ kubectl create -f locust-master-service.yaml
-
-This step will expose the Pod with an internal DNS name (`locust-master`) and ports `8089`, `5557`, and `5558`. As part of this step, the `type: LoadBalancer` directive in `locust-master-service.yaml` will tell Google Container Engine to create a Google Compute Engine forwarding-rule from a publicly available IP address to the `locust-master` Pod. To view the newly created forwarding-rule, execute the following:
-
-    $ gcloud compute forwarding-rules list
+This step will expose the Pod with an internal DNS name (`locust-master`) and ports `8089`, `5557`, and `5558`. As part of this step, the `type: LoadBalancer` directive in `locust-master-service.yaml` will tell Google Container Engine to create a Google Compute Engine forwarding-rule from a publicly available IP address to the `locust-master` Pod. To see the service IP address (`LoadBalancer`), issue the following command:
+
+    $ kubectl get services locust-master
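
Once the `LoadBalancer` has finished provisioning, the external IP can also be read directly with a JSONPath query, for example:

    $ kubectl get svc locust-master -o jsonpath="{.status.loadBalancer.ingress[0].ip}"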

-### Deploy locust-worker
-
-Now deploy `locust-worker-controller`:
-
-    $ kubectl create -f locust-worker-controller.yaml

-The `locust-worker-controller` is set to deploy 10 `locust-worker` Pods, to confirm they were deployed run the following:
+The `locust-worker-deployment` is set to deploy 10 `locust-worker` Pods. To confirm they were deployed, run the following:

    $ kubectl get pods -l name=locust,role=worker

-To scale the number of `locust-worker` Pods, issue a replication controller `scale` command.
+To scale the number of `locust-worker` Pods, issue a Deployment `scale` command.

-    $ kubectl scale --replicas=20 replicationcontrollers locust-worker
+    $ kubectl scale --replicas=20 deployment locust-worker

To confirm that the Pods have launched and are ready, get the list of `locust-worker` Pods:

    $ kubectl get pods -l name=locust,role=worker

-**Note:** depending on the desired number of `locust-worker` Pods, the Kubernetes cluster may need to be launched with more than 3 compute engine nodes and may also need a machine type more powerful than n1-standard-1. Refer to the [`gcloud alpha container clusters create`](https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/create) documentation for more information.
-
-### Setup Firewall Rules
-
-The final step in deploying these controllers and services is to allow traffic from your publicly accessible forwarding-rule IP address to the appropriate Container Engine instances.
-
-The only traffic we need to allow externally is to the Locust web interface, running on the `locust-master` Pod at port `8089`. First, get the target tags for the nodes in your Kubernetes cluster using the output from `kubectl get nodes`:
-
-    $ kubectl get nodes
-    NAME                        LABELS                                              STATUS
-    gke-ws-0e365264-node-4pdw   kubernetes.io/hostname=gke-ws-0e365264-node-4pdw   Ready
-    gke-ws-0e365264-node-jdcz   kubernetes.io/hostname=gke-ws-0e365264-node-jdcz   Ready
-    gke-ws-0e365264-node-kp3d   kubernetes.io/hostname=gke-ws-0e365264-node-kp3d   Ready
-
-The target tag is the node name prefix up to `-node` and is formatted as `gke-CLUSTER-NAME-[...]-node`. For example, if your node name is `gke-mycluster-12345678-node-abcd`, the target tag would be `gke-mycluster-12345678-node`.
-
-Now to create the firewall rule, execute the following:
-
-    $ gcloud compute firewall-rules create FIREWALL-RULE-NAME --allow=tcp:8089 --target-tags gke-CLUSTER-NAME-[...]-node
+**Note:** depending on the desired number of `locust-worker` Pods, the Kubernetes cluster may need to be launched with more than 3 compute engine nodes and may also need a machine type more powerful than n1-standard-1. Refer to the [`gcloud container clusters create`](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) documentation for more information.

## Execute Tests

-To execute the Locust tests, navigate to the IP address of your forwarding-rule (see above) and port `8089` and enter the number of clients to spawn and the client hatch rate then start the simulation.
+To execute the Locust tests, navigate to the IP address of your `locust-master` service LoadBalancer (see above) on port `8089`, enter the number of clients to spawn and the client hatch rate, then start the simulation.
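
The same action can be triggered without the web UI; as a sketch, assuming this Locust version exposes the classic `/swarm` endpoint and `[EXTERNAL-IP]` stands in for the LoadBalancer address:

    $ curl -X POST http://[EXTERNAL-IP]:8089/swarm -d "locust_count=100" -d "hatch_rate=10"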

## Deployment Cleanup

To tear down the workload simulation cluster, use the following steps. First, delete the Kubernetes cluster:

-    $ gcloud alpha container clusters delete CLUSTER-NAME
-
-Next, delete the forwarding rule that forwards traffic into the cluster.
-
-    $ gcloud compute forwarding-rules delete FORWARDING-RULE-NAME
-
-Finally, delete the firewall rule that allows incoming traffic to the cluster.
-
-    $ gcloud compute firewall-rules delete FIREWALL-RULE-NAME
+    $ gcloud container clusters delete CLUSTER-NAME

-To delete the sample web application, visit the [Google Cloud Console](https://console.developers.google.com).
+To delete the sample web application, visit the [Google Cloud Console](https://console.cloud.google.com).

## License

8 changes: 8 additions & 0 deletions deploy.sh
@@ -0,0 +1,8 @@
+#!/bin/bash -xe
+
+PROJECT_ID=$1
+
+sed -i "s/\$targetHost/http:\/\/\"${PROJECT_ID}.appspot.com\"/g" kubernetes-config/locust-master-deployment.yaml
+sed -i "s/\$targetHost/http:\/\/\"${PROJECT_ID}.appspot.com\"/g" kubernetes-config/locust-worker-deployment.yaml
+
+kubectl apply -f kubernetes-config
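
For reference, a typical invocation (the project ID shown is hypothetical):

    $ ./deploy.sh my-gcp-project
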
2 changes: 1 addition & 1 deletion docker-image/Dockerfile
@@ -34,4 +34,4 @@ EXPOSE 5557 5558 8089
RUN chmod 755 /locust-tasks/run.sh

# Start Locust using LOCUS_OPTS environment variable
-ENTRYPOINT ["/locust-tasks/run.sh"]
+CMD ["/bin/bash", "-c", "/locust-tasks/run.sh"]
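
With the `CMD` form above, the container can be smoke-tested locally before pushing; a sketch, assuming a locally built tag and the same environment variables the deployments set:

    $ docker run --rm -p 8089:8089 -e LOCUST_MODE=master -e TARGET_HOST=http://[PROJECT-ID].appspot.com gcr.io/[PROJECT-ID]/locust-tasks:latest
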
kubernetes-config/locust-master-deployment.yaml
@@ -13,8 +13,8 @@
# limitations under the License.


-kind: ReplicationController
-apiVersion: v1
+apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1. For version 1.9 use apps/v1
+kind: Deployment
metadata:
  name: locust-master
  labels:
@@ -23,8 +23,9 @@ metadata:
spec:
  replicas: 1
  selector:
-   name: locust
-   role: master
+   matchLabels:
+     name: locust
+     role: master
  template:
    metadata:
      labels:
@@ -34,13 +35,13 @@ spec:
      containers:
      - name: locust
        image: gcr.io/cloud-solutions-images/locust-tasks:latest
+       command: [ "/bin/bash", "-c", "--" ]
+       args: [ "/locust-tasks/run.sh" ]
        env:
        - name: LOCUST_MODE
-         key: LOCUST_MODE
          value: master
        - name: TARGET_HOST
-         key: TARGET_HOST
-         value: http://workload-simulation-webapp.appspot.com
+         value: $targetHost
        ports:
        - name: loc-master-web
          containerPort: 8089
@@ -50,4 +51,4 @@ spec:
          protocol: TCP
        - name: loc-master-p2
          containerPort: 5558
-         protocol: TCP
\ No newline at end of file
+         protocol: TCP
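
Once `deploy.sh` has substituted `$targetHost` and applied this file, the rollout can be checked with, for example:

    $ kubectl rollout status deployment/locust-master
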
kubernetes-config/locust-worker-deployment.yaml
@@ -13,8 +13,8 @@
# limitations under the License.


-kind: ReplicationController
-apiVersion: v1
+apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1. For version 1.9 use apps/v1
+kind: Deployment
metadata:
  name: locust-worker
  labels:
@@ -23,8 +23,9 @@ metadata:
spec:
  replicas: 10
  selector:
-   name: locust
-   role: worker
+   matchLabels:
+     name: locust
+     role: worker
  template:
    metadata:
      labels:
@@ -34,13 +35,12 @@ spec:
      containers:
      - name: locust
        image: gcr.io/cloud-solutions-images/locust-tasks:latest
+       command: [ "/bin/bash", "-c", "--" ]
+       args: [ "/locust-tasks/run.sh" ]
        env:
        - name: LOCUST_MODE
-         key: LOCUST_MODE
          value: worker
        - name: LOCUST_MASTER
-         key: LOCUST_MASTER
          value: locust-master
        - name: TARGET_HOST
-         key: TARGET_HOST
-         value: http://workload-simulation-webapp.appspot.com
+         value: $targetHost
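
The workers find the master through the `locust-master` DNS name set in `LOCUST_MASTER` above. One way to confirm they registered is to read the master's logs, for example:

    $ kubectl logs deployment/locust-master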
