
port to kubernetes 1.9 #24

Open · wants to merge 6 commits into master
92 changes: 34 additions & 58 deletions README.md
@@ -37,9 +37,9 @@ Before deploying the `locust-master` and `locust-worker` controllers, update eac
- name: TARGET_HOST
value: http://PROJECT-ID.appspot.com

### Update Controller Docker Image (Optional)
### Build Locust Docker Image

The `locust-master` and `locust-worker` controllers are set to use the pre-built `locust-tasks` Docker image, which has been uploaded to the [Google Container Registry](http://gcr.io) and is available at `gcr.io/cloud-solutions-images/locust-tasks`. If you are interested in making changes and publishing a new Docker image, refer to the following steps.
To build and publish the Locust controller Docker image, refer to the following steps.

First, [install Docker](https://docs.docker.com/installation/#installation) on your platform. Once Docker is installed and you've made changes to the `Dockerfile`, you can build, tag, and upload the image using the following steps:
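For reference, a build-and-push sequence might look roughly like the following sketch; `PROJECT-ID` and `TAG` are placeholders, and it assumes the `Dockerfile` lives in `docker-image/` as in this repository and that `gcr.io` authentication is already configured (for example via `gcloud auth configure-docker`):

```
# Build the image from the Dockerfile in docker-image/
docker build -t gcr.io/PROJECT-ID/locust-tasks:TAG docker-image/

# Push the image to the Google Container Registry
docker push gcr.io/PROJECT-ID/locust-tasks:TAG
```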

@@ -49,7 +49,7 @@ First, [install Docker](https://docs.docker.com/installation/#installation) on y

**Note:** you are not required to use the Google Container Registry. If you'd like to publish your images to the [Docker Hub](https://hub.docker.com) please refer to the steps in [Working with Docker Hub](https://docs.docker.com/userguide/dockerrepos/).

Once the Docker image has been rebuilt and uploaded to the registry you will need to edit the controllers with your new image location. Specifically, the `spec.template.spec.containers.image` field in each controller controls which Docker image to use.
Once the Docker image has been built and uploaded to the registry, you will need to edit the deployments with your new image location. Specifically, the `spec.template.spec.containers.image` field in each deployment controls which Docker image to use; a fragment is sketched below.
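For illustration only, the relevant fragment of a deployment manifest looks roughly like this; the image path is a placeholder, not a value shipped with this repository:

```
spec:
  template:
    spec:
      containers:
        - name: locust
          # placeholder image path; point this at the image you pushed
          image: gcr.io/PROJECT-ID/locust-tasks:latest
```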

If you uploaded your Docker image to the Google Container Registry:

@@ -61,96 +61,72 @@ If you uploaded your Docker image to the Docker Hub:

**Note:** the image location includes the `latest` tag so that the image is pulled down every time a new Pod is launched. To use a Kubernetes-cached copy of the image, remove `:latest` from the image location.

Please note that the image `gcr.io/cloud-solutions-images/locust-tasks:latest` is no longer available, so you will need to build and publish your own image.

### Deploy Kubernetes Cluster

First create the [Google Container Engine](http://cloud.google.com/container-engine) cluster using the `gcloud` command as shown below.
First create the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) cluster using the `gcloud` command as shown below.

**Note:** This command defaults to creating a three node Kubernetes cluster (not counting the master) using the `n1-standard-1` machine type. Refer to the [`gcloud alpha container clusters create`](https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/create) documentation information on specifying a different cluster configuration.
**Note:** This command defaults to creating a three node Kubernetes cluster (not counting the master) using the `n1-standard-1` machine type. Refer to the [`gcloud container clusters create`](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) documentation for information on specifying a different cluster configuration.

$ gcloud alpha container clusters create CLUSTER-NAME
$ gcloud container clusters create [CLUSTER-NAME]

After a few minutes, you'll have a working Kubernetes cluster with three nodes (not counting the Kubernetes master). Next, configure your system to use the `kubectl` command:

$ kubectl config use-context gke_PROJECT-ID_ZONE_CLUSTER-NAME
$ gcloud container clusters get-credentials [CLUSTER-NAME]

**Note:** the output from the previous `gcloud` cluster create command will contain the specific `kubectl config` command to execute for your platform/project.

### Deploy locust-master

Now that `kubectl` is setup, deploy the `locust-master-controller`:

$ kubectl create -f locust-master-controller.yaml

To confirm that the Replication Controller and Pod are created, run the following:

$ kubectl get rc
$ kubectl get pods -l name=locust,role=master

Next, deploy the `locust-master-service`:

$ kubectl create -f locust-master-service.yaml
Now that `kubectl` is set up, deploy the manifests in the `k8s` directory.

This step will expose the Pod with an internal DNS name (`locust-master`) and ports `8089`, `5557`, and `5558`. As part of this step, the `type: LoadBalancer` directive in `locust-master-service.yaml` will tell Google Container Engine to create a Google Compute Engine forwarding-rule from a publicly avaialble IP address to the `locust-master` Pod. To view the newly created forwarding-rule, execute the following:
Add your `tasks.py` to the `config` folder, then create the ConfigMap and apply the manifests as shown below (a minimal `tasks.py` sketch follows the commands):

$ gcloud compute forwarding-rules list
```
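# The manifests in k8s/ assume a load-test namespace (an assumption based on the
# namespace fields in k8s/*.yaml); create it first if it does not exist yet
kubectl create namespace load-test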
kubectl create configmap locust-tasks-configuration --from-file=config/tasks.py --namespace load-test
# the image name stays the same;
# just change the project, tag, and target URL below
python substitute.py --project-id <project-name> --image-name locust-tasks --image-tag <image-tag> --target-url <host>
kubectl apply -f k8s/environment-variable.yaml
kubectl apply -f k8s/locust-master-deployment.yaml
kubectl apply -f k8s/locust-worker-deployment.yaml
kubectl apply -f k8s/locust-master-service.yaml
```
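This repository does not prescribe the contents of `config/tasks.py`. As a minimal sketch for the pinned `locustio==0.8.1` (the endpoint path is a placeholder; the target host comes from `TARGET_HOST` via `run.sh`):

```
from locust import HttpLocust, TaskSet, task


class UserTasks(TaskSet):
    @task
    def index(self):
        # placeholder endpoint; replace with paths that exist on your TARGET_HOST
        self.client.get("/")


class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000  # milliseconds between simulated user tasks
    max_wait = 3000
```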

### Deploy locust-worker
To confirm that the deployment and Pod are created, run the following:

Now deploy `locust-worker-controller`:
$ kubectl get deployments
$ kubectl get pods -l name=locust,role=master

$ kubectl create -f locust-worker-controller.yaml
This step will expose the Pod with an internal DNS name (`locust-master`) and ports `8089`, `5557`, and `5558`. As part of this step, the `type: LoadBalancer` directive in `locust-master-service.yaml` will tell Google Kubernetes Engine to create a Google Compute Engine forwarding-rule from a publicly available IP address to the `locust-master` Pod. To see the service's external IP address (the `LoadBalancer` ingress), issue the following command:

$ kubectl get services locust-master

The `locust-worker-controller` is set to deploy 10 `locust-worker` Pods, to confirm they were deployed run the following:
The `locust-worker-deployment` is set to deploy 10 `locust-worker` Pods. To confirm they were deployed, run the following:

$ kubectl get pods -l name=locust,role=worker

To scale the number of `locust-worker` Pods, issue a replication controller `scale` command.
To scale the number of `locust-worker` Pods, issue a deployment `scale` command.

$ kubectl scale --replicas=20 replicationcontrollers locust-worker
$ kubectl scale --replicas=20 deployment locust-worker

To confirm that the Pods have launched and are ready, get the list of `locust-worker` Pods:

$ kubectl get pods -l name=locust,role=worker

**Note:** depending on the desired number of `locust-worker` Pods, the Kubernetes cluster may need to be launched with more than 3 compute engine nodes and may also need a machine type more powerful than n1-standard-1. Refer to the [`gcloud alpha container clusters create`](https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/create) documentation for more information.

### Setup Firewall Rules

The final step in deploying these controllers and services is to allow traffic from your publicly accessible forwarding-rule IP address to the appropriate Container Engine instances.

The only traffic we need to allow externally is to the Locust web interface, running on the `locust-master` Pod at port `8089`. First, get the target tags for the nodes in your Kubernetes cluster using the output from `kubectl get nodes`:

$ kubectl get nodes
NAME LABELS STATUS
gke-ws-0e365264-node-4pdw kubernetes.io/hostname=gke-ws-0e365264-node-4pdw Ready
gke-ws-0e365264-node-jdcz kubernetes.io/hostname=gke-ws-0e365264-node-jdcz Ready
gke-ws-0e365264-node-kp3d kubernetes.io/hostname=gke-ws-0e365264-node-kp3d Ready

The target tag is the node name prefix up to `-node` and is formatted as `gke-CLUSTER-NAME-[...]-node`. For example, if your node name is `gke-mycluster-12345678-node-abcd`, the target tag would be `gke-mycluster-12345678-node`.

Now to create the firewall rule, execute the following:

$ gcloud compute firewall-rules create FIREWALL-RULE-NAME --allow=tcp:8089 --target-tags gke-CLUSTER-NAME-[...]-node
**Note:** depending on the desired number of `locust-worker` Pods, the Kubernetes cluster may need to be launched with more than three Compute Engine nodes and may also need a machine type more powerful than `n1-standard-1`. Refer to the [`gcloud container clusters create`](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create) documentation for more information.
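For example (values are illustrative only, not requirements), a larger cluster could be created like this:

```
# example: five n1-standard-2 nodes to host more locust-worker Pods
gcloud container clusters create [CLUSTER-NAME] \
    --num-nodes=5 \
    --machine-type=n1-standard-2
```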

## Execute Tests

To execute the Locust tests, navigate to the IP address of your forwarding-rule (see above) and port `8089` and enter the number of clients to spawn and the client hatch rate then start the simulation.
To execute the Locust tests, navigate to the external IP address of the `locust-master` service's load balancer (see above) on port `8089`, enter the number of clients to spawn and the client hatch rate, and then start the simulation.
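To grab the external IP from the command line instead, one option (a sketch, assuming the service lives in the `load-test` namespace as in the manifests above and that the load balancer has finished provisioning) is:

```
kubectl get service locust-master \
    --namespace load-test \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```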

## Deployment Cleanup

To teardown the workload simulation cluster, use the following steps. First, delete the Kubernetes cluster:

$ gcloud alpha container clusters delete CLUSTER-NAME

Next, delete the forwarding rule that forwards traffic into the cluster.

$ gcloud compute forwarding-rules delete FORWARDING-RULE-NAME

Finally, delete the firewall rule that allows incoming traffic to the cluster.

$ gcloud compute firewall-rules delete FIREWALL-RULE-NAME
$ gcloud container clusters delete CLUSTER-NAME

To delete the sample web application, visit the [Google Cloud Console](https://console.developers.google.com).
To delete the sample web application, visit the [Google Cloud Console](https://console.cloud.google.com).

## License

File renamed without changes.
16 changes: 16 additions & 0 deletions deploy.sh
@@ -0,0 +1,16 @@
#!/bin/bash -xe

IMAGE_NAME=$1
TARGET_HOST=$2

if [[ -z "${IMAGE_NAME}" || -z "${TARGET_HOST}" ]]; then
  exit 1
fi

sed -i "s|\$targetHost|${TARGET_HOST}|g" kubernetes-config/environment-variable.yaml
sed -i "s|\$imageName|${IMAGE_NAME}|g" kubernetes-config/locust-master-deployment.yaml
sed -i "s|\$imageName|${IMAGE_NAME}|g" kubernetes-config/locust-worker-deployment.yaml

kubectl apply -f kubernetes-config --dry-run
kubectl create configmap locust-tasks-configuration --from-file=config/tasks.py
kubectl apply -f kubernetes-config
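A hypothetical invocation of `deploy.sh`; both arguments are placeholders, and the image reference comes first, followed by the target host:

```
./deploy.sh gcr.io/my-project/locust-tasks:latest http://my-app.example.com
```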
2 changes: 1 addition & 1 deletion docker-image/Dockerfile
@@ -34,4 +34,4 @@ EXPOSE 5557 5558 8089
RUN chmod 755 /locust-tasks/run.sh

# Start Locust using LOCUS_OPTS environment variable
ENTRYPOINT ["/locust-tasks/run.sh"]
CMD ["/bin/bash", "-c", "/locust-tasks/run.sh"]
12 changes: 1 addition & 11 deletions docker-image/locust-tasks/requirements.txt
@@ -1,11 +1 @@
Flask==0.10.1
gevent==1.0.1
greenlet==0.4.5
itsdangerous==0.24
Jinja2==2.7.3
locustio==0.7.2
MarkupSafe==0.23
msgpack-python==0.4.6
pyzmq==14.5.0
requests==2.6.2
Werkzeug==0.10.4
locustio==0.8.1
2 changes: 1 addition & 1 deletion docker-image/locust-tasks/run.sh
100644 → 100755
@@ -16,7 +16,7 @@


LOCUST="/usr/local/bin/locust"
LOCUS_OPTS="-f /locust-tasks/tasks.py --host=$TARGET_HOST"
LOCUS_OPTS="-f /locust-tasks/script/tasks.py --host=$TARGET_HOST"
LOCUST_MODE=${LOCUST_MODE:-standalone}

if [[ "$LOCUST_MODE" = "master" ]]; then
7 changes: 7 additions & 0 deletions k8s/environment-variable.yaml
@@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment-variable
  namespace: load-test
data:
  TARGET_HOST: $targetUrl
@@ -13,18 +13,20 @@
# limitations under the License.


kind: ReplicationController
apiVersion: v1
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1. For version 1.9 use apps/v1
kind: Deployment
metadata:
  name: locust-master
  namespace: load-test
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    name: locust
    role: master
    matchLabels:
      name: locust
      role: master
  template:
    metadata:
      labels:
@@ -33,14 +35,15 @@ spec:
    spec:
      containers:
        - name: locust
          image: gcr.io/cloud-solutions-images/locust-tasks:latest
          image: $appImage
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "/locust-tasks/run.sh" ]
          env:
            - name: LOCUST_MODE
              key: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              key: TARGET_HOST
              value: http://workload-simulation-webapp.appspot.com
          envFrom:
            - configMapRef:
                name: environment-variable
          ports:
            - name: loc-master-web
              containerPort: 8089
@@ -51,3 +54,10 @@ spec:
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
          volumeMounts:
            - name: locust-tasks-configuration
              mountPath: /locust-tasks/script
      volumes:
        - name: locust-tasks-configuration
          configMap:
            name: locust-tasks-configuration
@@ -17,6 +17,7 @@ kind: Service
apiVersion: v1
metadata:
  name: locust-master
  namespace: load-test
  labels:
    name: locust
    role: master
@@ -13,18 +13,20 @@
# limitations under the License.


kind: ReplicationController
apiVersion: v1
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1. For version 1.9 use apps/v1
kind: Deployment
metadata:
  name: locust-worker
  namespace: load-test
  labels:
    name: locust
    role: worker
spec:
  replicas: 10
  selector:
    name: locust
    role: worker
    matchLabels:
      name: locust
      role: worker
  template:
    metadata:
      labels:
@@ -33,14 +35,21 @@ spec:
    spec:
      containers:
        - name: locust
          image: gcr.io/cloud-solutions-images/locust-tasks:latest
          image: $appImage
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "/locust-tasks/run.sh" ]
          env:
            - name: LOCUST_MODE
              key: LOCUST_MODE
              value: worker
            - name: LOCUST_MASTER
              key: LOCUST_MASTER
              value: locust-master
            - name: TARGET_HOST
              key: TARGET_HOST
              value: http://workload-simulation-webapp.appspot.com
          envFrom:
            - configMapRef:
                name: environment-variable
          volumeMounts:
            - name: locust-tasks-configuration
              mountPath: /locust-tasks/script
      volumes:
        - name: locust-tasks-configuration
          configMap:
            name: locust-tasks-configuration
38 changes: 38 additions & 0 deletions substitute.py
@@ -0,0 +1,38 @@
import argparse


def replace_in_file(filename, target, replacement):
    with open(filename) as f:
        lines = [line.rstrip() for line in f.readlines()]

    with open(filename, 'w+') as f:
        for line in lines:
            f.write(line.replace(target, replacement) + '\n')

def change_target_url(target_url):
    replace_in_file('k8s/environment-variable.yaml', '$targetUrl',
                    target_url)

def change_image(project_id, image_name, image_tag):
    replace_in_file('k8s/locust-master-deployment.yaml', '$appImage',
                    'gcr.io/{project_id}/{image_name}:{image_tag}'.format(
                        project_id=project_id,
                        image_name=image_name,
                        image_tag=image_tag))
    replace_in_file('k8s/locust-worker-deployment.yaml', '$appImage',
                    'gcr.io/{project_id}/{image_name}:{image_tag}'.format(
                        project_id=project_id,
                        image_name=image_name,
                        image_tag=image_tag))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--project-id")
    parser.add_argument("--image-name")
    parser.add_argument("--image-tag")
    parser.add_argument("--target-url")
    result = parser.parse_args()

    change_image(result.project_id, result.image_name,
                 result.image_tag)
    change_target_url(result.target_url)