fromcj/docker-study-group-labs
Overview

This document contains the steps that were covered in the Docker 101 labs. Useful if you missed a class or forgot a step.

Prerequisites

  • An AWS account
  • An SSH client, OpenSSH preferred
  • Basic Linux knowledge preferred

Tmux Cheat Sheet

  • ctrl-b c -- create new window
  • ctrl-b n -- jump to next window
  • ctrl-b p -- jump to previous window
  • ctrl-b 0 -- jump to 0th window
  • ctrl-b " -- split window horizontally (pane)
  • ctrl-b o -- jump to next pane
  • ctrl-b x -- close pane

Lab 1: Installation

Ubuntu Linux (manual)

  1. Spin up a small EC2 instance using the Ubuntu Linux AMI
  2. uname -a to verify we are running a Linux 3.10 or higher kernel
  3. sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
  4. curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  5. sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  6. sudo apt-get update
  7. sudo apt-get install docker-ce
  8. sudo docker info
  9. destroy the instance
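The manual steps above can be collected into a single script you could copy onto the instance; a sketch, assuming the same Ubuntu AMI (so lsb_release is available):

```shell
# Collect install steps 3-8 into one script; the commands mirror the list above.
cat > /tmp/install-docker.sh <<'EOF'
#!/bin/sh
set -e
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
sudo docker info
EOF
chmod +x /tmp/install-docker.sh
```

The single-quoted EOF keeps $(lsb_release -cs) unexpanded until the script actually runs on the instance.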

Ubuntu Linux (automated)

  1. Spin up a small EC2 instance using the Ubuntu Linux AMI
  2. uname -a to verify we are running a Linux 3.10 or higher kernel
  3. sudo apt-get update
  4. whereis curl
  5. sudo apt-get install curl
  6. curl https://get.docker.com/ | sudo sh
  7. sudo docker info
  8. docker info
  9. sudo usermod -aG docker ubuntu
  10. docker info
  11. log out and back in again
  12. docker info
  13. save the instance (we'll be using it in future labs)
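Step 2 checks the kernel version by eye; the same check can be scripted. A sketch (kernel_ok is a helper name invented here, not part of the labs):

```shell
# Return success if a kernel version string satisfies Docker's 3.10+ requirement.
kernel_ok() {
    major=${1%%.*}        # text before the first dot
    rest=${1#*.}
    minor=${rest%%.*}     # text between the first and second dots
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

kernel_ok "$(uname -r)" && echo "kernel is new enough for Docker"
```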

Lab 2: Kicking The Tires

  1. Spin up an Ubuntu EC2 instance with Docker installed
  2. sudo status docker
  3. status docker
  4. cat /etc/group
  5. docker info
  6. sudo service docker stop
  7. sudo service docker start
  8. docker help
  9. man docker
  10. docker help run
  11. docker run --help
  12. docker run --interactive --tty ubuntu /bin/bash
  13. start a second ssh session to your EC2 instance (tmux can simplify this)
  14. compare results of commands from inside Docker and on the EC2 instance
  15. whoami
  16. hostname
  17. cat /etc/hosts
  18. hostname --all-ip-addresses
  19. ps -aux
  20. uname -a
  21. top
  22. ls /bin
  23. sudo find / -type d | wc --lines -- no sudo needed on the Docker side
  24. cat /proc/cpuinfo
  25. cat /proc/meminfo
  26. cat /proc/net/dev
  27. in your Docker container, apt-get update; apt-get install vim
  28. in your Docker container, exit

Lab 3: Manipulating Containers

  1. docker ps -- show running containers
  2. docker ps --all -- show all containers
  3. docker ps --latest -- show the last running container
  4. docker run --name wolverine --interactive --tty ubuntu /bin/bash
  5. exit the container
  6. docker ps --latest -- notice the container name
  7. docker start wolverine -- start a stopped container
  8. docker ps -- should see the wolverine container running
  9. docker attach <container id> -- see how few characters you can get away with
  10. exit to stop the container

Lab 4: Manipulating Containers (continued)

  1. docker run --detach --name nightcrawler ubuntu /bin/sh -c "while true; do echo hello world; sleep 2; done"
  2. docker logs --follow --timestamps nightcrawler
  3. docker top nightcrawler
  4. docker stats nightcrawler
  5. docker exec --detach nightcrawler touch /etc/new_config_file
  6. docker exec --interactive --tty nightcrawler /bin/bash
  7. ls -alh /etc/new_config_file -- should see the file added previously
  8. exit
  9. docker stop nightcrawler
  10. docker ps -a -- container still exists but is not running
  11. docker run --restart=always --detach --name banshee ubuntu /bin/sh -c "sleep 2; exit 1964"
  12. watch docker ps -- notice how the container keeps restarting after the "failure"
  13. docker stop banshee
  14. docker run --restart=on-failure:5 --detach --name colossus ubuntu /bin/sh -c "sleep 5; exit 1964"
  15. watch docker ps -- notice how Docker gives up restarting after 5 tries
  16. docker inspect colossus
  17. docker inspect --format='{{ .State.Running }}' colossus
  18. docker stop colossus
  19. docker rm colossus
  20. clean up the remaining containers on your own. Try using id and names.
  21. docker rm --volumes --force $(docker ps --all --quiet) -- shell magic to nuke all containers
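The "shell magic" in the last step works because docker ps --all --quiet prints one container id per line, and the $( ) substitution word-splits those ids into arguments for docker rm. The same mechanics, simulated with made-up ids so it runs without Docker:

```shell
# Simulate the id list docker ps --all --quiet would print (ids are invented)
ids=$(printf '%s\n' cb888707fcba 85098924c514)
set -- $ids                           # word-split the ids into $1, $2, ...
cmd="docker rm --volumes --force $*"  # all ids become arguments to one docker rm
echo "would run: $cmd"
```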

Lab 5: Docker Repository

  1. visit https://hub.docker.com/
  2. create an account (we'll use it in later labs)
  3. click the Explore link
  4. browse through the images marked as official
  5. poke around the following images looking for differences between them
  6. docker run --interactive --tty alpine /bin/bash
  7. docker run --interactive --tty centos /bin/bash
  8. docker run --interactive --tty amazonlinux /bin/bash
  9. docker run --interactive --tty bash /bin/bash
  10. docker run --interactive --tty clearlinux /bin/bash

Lab 6: Docker Images

  1. docker images
  2. docker run --interactive --tty ubuntu:16.04
  3. docker run --interactive --tty ubuntu:14.04
  4. docker run --interactive --tty ubuntu:latest
  5. docker pull ubuntu:12.04
  6. docker images
  7. docker search python
  8. docker search kurron
  9. docker run --interactive --tty kurron/docker-azul-jdk-8-build /bin/bash
  10. java -version
  11. ansible --version
  12. docker --version
  13. exit
  14. docker images
  15. docker rmi --force $(docker images --quiet)
  16. docker images

Lab 7: Creating Docker Images (the hard way)

  1. docker run --interactive --tty ubuntu:latest
  2. apt-get update
  3. apt-get install apache2
  4. exit
  5. LAST=$(docker ps --latest --quiet)
  6. echo ${LAST}
  7. docker commit ${LAST} kurron/apache2 <--- use your own repository account
  8. docker images kurron/apache2
  9. docker commit --message "Created by hand" --author "Ron Kurr kurron@jvmguy.com" ${LAST} kurron/apache2:by-hand
  10. docker inspect kurron/apache2:by-hand
  11. docker run --interactive --tty kurron/apache2:by-hand
  12. service apache2 status

Lab 8: Creating Docker Images (an easier way)

  1. git clone https://github.com/kurron/docker-study-group-labs.git
  2. cd docker-study-group-labs/solutions/lab-07
  3. docker build --tag="kurron/static_web:v1.0.0" .
  4. docker images
  5. docker build --file Dockerfile.broken .
  6. docker build --no-cache --tag="kurron/static_web:v1.0.0" .
  7. docker history 85098924c514 <--- your image id will be different
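The lab's Dockerfile isn't reproduced here, but a static_web-style Dockerfile generally has this shape (a sketch written to /tmp; the maintainer and index content are placeholders, not the repo's actual file):

```shell
cat > /tmp/Dockerfile <<'EOF'
FROM ubuntu:16.04
LABEL maintainer="you@example.com"
RUN apt-get update && apt-get install -y nginx
RUN echo 'Hi from static_web' > /var/www/html/index.html
EXPOSE 80
EOF
# build it with: docker build --tag="kurron/static_web:v1.0.0" --file /tmp/Dockerfile /tmp
```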

Lab 9: Fun With Port Bindings

  1. docker run --detach --publish 80 --name domino kurron/static_web:v1.0.0 nginx -g "daemon off;"
  2. docker ps --latest
  3. docker port cb888707fcba
  4. docker port domino 80
  5. docker run --detach --publish 80:80 --name domino kurron/static_web:v1.0.0 nginx -g "daemon off;"
  6. docker run --detach --publish 8080:80 --name domino kurron/static_web:v1.0.0 nginx -g "daemon off;"
  7. docker run --detach --publish 127.0.0.1:8080:80 --name domino kurron/static_web:v1.0.0 nginx -g "daemon off;"
  8. docker run --detach --publish 127.0.0.1::80 --name domino kurron/static_web:v1.0.0 nginx -g "daemon off;"
  9. stop and remove all containers
  10. docker run --detach --publish-all --name domino kurron/static_web:v1.0.0 nginx -g "daemon off;"
  11. docker port domino
  12. curl localhost:32769 <--- your port will be different
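docker port domino 80 prints a binding such as 0.0.0.0:32769; the host port can be peeled off with parameter expansion so the final curl can be scripted. The mapping below is a stand-in, not live output:

```shell
mapping="0.0.0.0:32769"            # stand-in for: mapping=$(docker port domino 80)
host_port=${mapping##*:}           # drop everything through the last colon
echo "curl localhost:${host_port}"
```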

Lab 10: Dockerfile Madness

  1. cd solutions/lab-10
  2. docker build --tag="kurron/dockerfile-example:v1.0.0" .
  3. docker run --interactive --tty kurron/dockerfile-example:v1.0.0
  4. docker run --interactive --tty kurron/dockerfile-example:v1.0.0 -alh /opt
  5. docker run --interactive --tty kurron/dockerfile-example:v1.0.0 -alh /opt/wordpress
  6. docker images
  7. docker inspect kurron/dockerfile-example:v1.0.0

Lab 11: Publishing Docker Images

  1. cd labs/lab-11
  2. edit the Dockerfile and local.txt files, using your personal settings
  3. docker build --tag="kurron/publish-example:v1.0.0" . <--- use your own account
  4. docker run --interactive --tty kurron/publish-example:v1.0.0
  5. docker push kurron/publish-example:v1.0.0
  6. docker login
  7. docker push kurron/publish-example:v1.0.0
  8. visit https://hub.docker.com/ and find your image
  9. run somebody else's image, illustrating how image sharing works

Lab 12: Automated Image Building

This is difficult to explain in text so try to be in class for this one.

  1. Create a GitHub account if you don't already have one
  2. Create a new repository using the labs/lab-11 folder as source
  3. Log into your Docker Hub account
  4. click Create -> Create Automated Build
  5. Add Repository
  6. Add your GitHub account
  7. Select your repository
  8. Create to create the build project
  9. Verify that the build was successful
  10. Checkout your GitHub project
  11. Make an edit to your local.txt
  12. git commit -am 'Ron made me do this'
  13. git push
  14. In the Docker Hub console, make sure your Docker build gets triggered
  15. Pull down the latest image and run it, ensuring your changes show up

Lab 13: Simple Volume Mount

  1. cd docker-study-group-labs
  2. git reset --hard <--- will nuke any local changes you may have made
  3. git pull to get the current bits
  4. cd solutions/lab-12
  5. docker build --tag="kurron/mount-example:v1.0.0" . <--- use your own account
  6. docker history kurron/mount-example:v1.0.0
  7. docker run --detach --publish 80 --name mystique --volume ${PWD}/website:/var/www/html/website:ro kurron/mount-example:v1.0.0 nginx
  8. docker port mystique
  9. curl --silent localhost:32771 <--- your port will be different
  10. edit website/index.html
  11. curl --silent localhost:32771
  12. determine the public address of your EC2 instance
  13. open your web browser to http://ec2-instance-address:32771/
  14. Tip: Volumes can also be shared between containers and can persist even when containers are stopped
  15. Tip: If the container directory doesn't exist Docker will create it.

Lab 14: Networking (single host setup)

  1. stop any running containers
  2. docker rm --volumes --force $(docker ps --all --quiet)
  3. docker rmi --force $(docker images --quiet)
  4. docker run --name thor --detach --publish-all redis:latest
  5. docker port thor
  6. sudo apt-get install redis-tools
  7. redis-cli -h 127.0.0.1 -p 32769 ping <--- use your own port
  8. docker run --name sif --interactive --tty --rm redis:latest redis-cli ping <-- will fail with a connection error

Lab 15: Networking (Docker Internal Networking)

  1. Every Docker container is assigned an IP address, provided through an interface created when we installed Docker. That interface is called docker0.
  2. ip a show docker0 (you may have to install the iproute2 package)
  3. The docker0 interface is a virtual Ethernet bridge that connects our containers and the local host network.
  4. ip a show -- for every container there is a veth interface
  5. docker run --interactive --tty --rm ubuntu:latest bash
  6. apt-get update && apt-get install iproute2 traceroute
  7. ip a show eth0 -- we can see the EC2-side ip address of the container
  8. traceroute google.com -- notice how we go through the docker0 ip address?
  9. exit
  10. sudo iptables --table nat --list --numeric -- this is just to underscore that NAT is happening
  11. docker inspect thor or docker inspect --format '{{ .NetworkSettings.IPAddress }}' thor to get the ip address
  12. redis-cli -h 172.17.0.2 ping <--- use your own ip, notice we no longer have to specify a port
  13. the DNAT line, e.g. DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:32769 to:172.17.0.2:6379, shows the NATing going on
  14. docker inspect --format '{{ .NetworkSettings.IPAddress }}' thor -- remember this value
  15. docker restart thor
  16. docker inspect --format '{{ .NetworkSettings.IPAddress }}' thor -- address can change on you
  17. TIP: hard coding addresses and the fact that addresses can change make internal networking difficult to use in production

Lab 16: Networking (Docker Networking)

  1. stop any running containers
  2. docker rm --volumes --force $(docker ps --all --quiet)
  3. docker rmi --force $(docker images --quiet)
  4. docker network create asgard
  5. docker network inspect asgard
  6. TIP: in addition to bridge networks, which exist on a single host, we can also create overlay networks, which allow us to span multiple hosts.
  7. docker network ls
  8. TIP: docker network rm will remove a network
  9. docker run --name thor --net asgard --detach --publish-all redis:latest
  10. docker network inspect asgard -- notice how thor is now a member of the network?
  11. docker run --name sif --net asgard --interactive --tty --rm redis:latest redis-cli -h thor ping <-- this now works!
  12. docker run --name heimdall --net asgard --interactive --tty --rm ubuntu:latest /bin/bash
  13. apt-get update && apt-get install dnsutils iputils-ping
  14. nslookup thor
  15. ping thor.asgard -- the network name becomes the domain name
  16. ctrl-c then exit
  17. docker run --name hogun --detach --publish-all redis:latest
  18. docker network inspect asgard -- notice how hogun is not a member of the network?
  19. docker network connect asgard hogun
  20. docker network inspect asgard -- notice how hogun is now a member of the network?
  21. recreate heimdall and use him to see hogun
  22. docker network disconnect asgard hogun to remove hogun from the network

Lab 17: Docker Inside Docker...Whaaaaat?

  1. cd solutions/lab-17
  2. ./clean-slate-protocol.sh
  3. cat Dockerfile
  4. docker build --tag="kurron/docker-in-docker:latest" .
  5. docker run --rm kurron/docker-in-docker:latest
  6. docker run --interactive --tty --rm --workdir /work-area --volume ${PWD}:/work-area:ro kurron/docker-in-docker:latest bash
  7. docker build --tag="kurron/docker-in-docker:latest" . <--- why does this fail?
  8. exit
  9. docker run --interactive --tty --rm --workdir /work-area --volume /var/run/docker.sock:/var/run/docker.sock --volume ${PWD}/Dockerfile-CentOS:/work-area/Dockerfile:ro kurron/docker-in-docker:latest bash
  10. cat Dockerfile
  11. docker build --tag="kurron/docker-in-docker:CentOS" .
  12. docker images <-- notice the newly built CentOS image
  13. docker run --rm kurron/docker-in-docker:CentOS
  14. docker run --interactive --tty --rm --cidfile=/tmp/containerid.txt kurron/docker-in-docker:CentOS bash <-- you just started a Docker container from within a Docker container!
  15. exit <-- leave the CentOS container
  16. cat /tmp/containerid.txt <-- holds the id of the container we just exited
  17. exit <-- leave the docker-in-docker container
  18. Tip: use docker wait <container id> to wait for a long running container to exit, obtaining its exit code
  19. Tip: Drone is a Docker-in-Docker build engine
  20. Tip: Shippable is a CI/CD SaaS that supports Docker

Lab 18: Application Configuration, Docker Style

  1. cd solutions/lab-18
  2. ./clean-slate-protocol.sh
  3. docker run --interactive --tty --rm --workdir /work-area --volume ${PWD}/config.ini:/work-area/config.ini:ro ubuntu:latest bash
  4. cat config.ini
  5. exit
  6. docker run --interactive --tty --rm --env username=logan --env password=Weapon-X ubuntu:latest bash
  7. env | sort
  8. exit
  9. docker run --interactive --tty --rm --env-file config.ini ubuntu:latest bash
  10. env | sort
  11. exit
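--env-file expects one KEY=value pair per line, which happens to also be valid shell, so the same file can be sourced locally to compare against the container's environment. The values mirror the --env flags used above:

```shell
cat > /tmp/config.ini <<'EOF'
username=logan
password=Weapon-X
EOF
set -a                 # auto-export everything the sourced file defines
. /tmp/config.ini
set +a
echo "username=$username password=$password"
```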

Lab 19: More Fun With Volumes

  1. cd solutions/lab-19
  2. ./clean-slate-protocol.sh
  3. cat nginx/Dockerfile
  4. docker build --tag="study-group/nginx:latest" nginx
  5. cat new-england/Dockerfile
  6. docker build --tag="study-group/new-england:latest" new-england
  7. cat miami/Dockerfile
  8. docker build --tag="study-group/miami:latest" miami
  9. docker images
  10. docker run --name new-england study-group/new-england:latest
  11. docker run --name miami study-group/miami:latest
  12. docker run --name superbowl --detach --publish-all --volumes-from new-england:ro study-group/nginx:latest nginx
  13. docker ps <-- notice how only the superbowl container is running
  14. IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' superbowl)
  15. curl --silent ${IP}
  16. docker stop superbowl
  17. docker rm superbowl
  18. docker run --name superbowl --detach --publish-all --volumes-from miami:ro study-group/nginx:latest nginx
  19. IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' superbowl)
  20. curl --silent ${IP}
  21. docker inspect -f "{{ range .Mounts }}{{.}}{{end}}" superbowl
  22. docker inspect -f "{{ range .Mounts }}{{.}}{{end}}" miami
  23. docker inspect -f "{{ range .Mounts }}{{.}}{{end}}" new-england
  24. use ls and cat to poke around those folders (need to use sudo)
  25. docker run --rm --volumes-from new-england:ro --volume $(pwd):/backup:rw ubuntu tar cvf /backup/backup.tar /var/www/html
  26. tar --list --verbose --file backup.tar
  27. Volume Bullet Points
  • Volumes can be shared and reused between containers
  • A container doesn't have to be running to share its volumes
  • Changes to a volume are made directly
  • Changes to a volume will not be included when you update an image
  • Volumes persist even when no containers use them
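The backup trick in step 25 is just tar plus a bind mount; the tar round trip itself can be exercised locally without Docker (the paths under /tmp are scratch data invented here):

```shell
# Create a fake website, archive it, then restore it elsewhere -- the same
# mechanics the backup container performs against /var/www/html.
mkdir -p /tmp/lab19/var/www/html
echo 'patriots' > /tmp/lab19/var/www/html/index.html
tar cf /tmp/lab19/backup.tar -C /tmp/lab19 var/www/html
mkdir -p /tmp/lab19/restore
tar xf /tmp/lab19/backup.tar -C /tmp/lab19/restore
cat /tmp/lab19/restore/var/www/html/index.html
```

The restore half is the counterpart you would run inside a throwaway container with --volumes-from pointed at the target.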

Lab 20: Docker Compose

  1. cd solutions/lab-20
  2. ./clean-slate-protocol.sh
  3. cat install-docker-compose.sh
  4. ./install-docker-compose.sh
  5. less docker-compose.yml
  6. docker-compose config
  7. docker-compose pull
  8. docker-compose images
  9. docker-compose up -d
  10. docker-compose ps
  11. docker-compose top
  12. docker-compose logs
  13. docker volume ls
  14. docker volume inspect lab20_mongodb-data
  15. docker-compose up -d
  16. docker-compose ps
  17. docker-compose port showcase 8080
  18. curl --silent localhost:32811/operations/health | python -m json.tool <--- use the correct port
  19. curl --silent localhost:32811/operations/info | python -m json.tool <--- use the correct port
  20. docker-compose down --rmi all --volumes --remove-orphans
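The lab's docker-compose.yml isn't shown above, but given the lab20_mongodb-data volume and the showcase service on port 8080, its rough shape is something like this (a guess at the file; the showcase image name is a placeholder):

```shell
cat > /tmp/docker-compose.yml <<'EOF'
version: '2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - mongodb-data:/data/db
  showcase:
    image: some/spring-boot-app:latest   # placeholder; the lab's actual image differs
    ports:
      - "8080"
volumes:
  mongodb-data:
EOF
```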

Lab 21: Docker Machine

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean.

  1. git reset --hard
  2. cd solutions/lab-21
  3. ./clean-slate-protocol.sh
  4. ./install-docker-machine.sh
  5. docker-machine --help
  6. docker-machine create --help
  7. cat fix-ssh-key-permissions.sh
  8. ./fix-ssh-key-permissions.sh
  9. cat create-docker-machine.sh
  10. cat create-alpha-machine.sh
  11. ./create-alpha-machine.sh us-east-1 a
  12. ./create-bravo-machine.sh us-east-1 a
  13. ./create-charlie-machine.sh us-east-1 a
  14. ./create-delta-machine.sh us-east-1 a
  15. ./create-echo-machine.sh us-east-1 a
  16. cat fix-docker-permissions.sh
  17. ./fix-docker-permissions.sh <alpha ip>
  18. ./fix-docker-permissions.sh <bravo ip>
  19. ./fix-docker-permissions.sh <charlie ip>
  20. ./fix-docker-permissions.sh <delta ip>
  21. ./fix-docker-permissions.sh <echo ip>
  22. docker-machine ls
  23. docker-machine inspect alpha
  24. docker-machine ip bravo
  25. docker-machine ssh charlie hostname
  26. docker-machine scp fix-ssh-key-permissions.sh delta:/tmp
  27. docker-machine url echo
  28. docker-machine ls
  29. docker-machine stop charlie
  30. docker-machine ls
  31. examine the EC2 console to see the status of the charlie instance
  32. docker-machine start charlie
  33. docker-machine ls <-- what is it complaining about and why?
  34. docker-machine regenerate-certs charlie
  35. docker-machine ls
  36. docker-machine status delta
  37. docker-machine restart echo
  38. docker-machine ls
  39. docker-machine version alpha
  40. docker-machine upgrade alpha
  41. docker-machine rm alpha
  42. See if alpha is in the EC2 console
  43. docker-machine stop bravo charlie delta echo

Lab 22: Docker Swarm (creation)

A swarm is a cluster of Docker Engines where you deploy services. The Docker Engine CLI includes the commands for swarm management, such as adding and removing nodes. The CLI also includes the commands you need to deploy services to the swarm and manage service orchestration. A node is an instance of the Docker Engine participating in the swarm.

  1. git reset --hard
  2. cd solutions/lab-22
  3. ./clean-slate-protocol.sh
  4. docker-machine start bravo charlie delta echo
  5. docker-machine ls
  6. docker-machine regenerate-certs bravo charlie delta echo
  7. docker-machine ls
  8. In the EC2 console, adjust the security group to allow ingress on ports 22, 2376, 2377, 3376, 7946 (tcp and udp), and 4789 (udp)
  9. cat create-swarm.sh
  10. ./create-swarm.sh
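create-swarm.sh isn't reproduced here, but a swarm bootstrap over docker-machine typically follows this shape. A dry-run sketch that only prints the commands (node names come from Lab 21; the token handling is an assumption):

```shell
manager=bravo
workers="charlie delta echo"
plan="docker-machine ssh ${manager} docker swarm init --advertise-addr \$(docker-machine ip ${manager})"
for node in ${workers}; do
    plan="${plan}
docker-machine ssh ${node} docker swarm join --token \${TOKEN} \${MANAGER_IP}:2377"
done
printf '%s\n' "$plan"
```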

Lab 23: Docker Swarm (network creation)

As of Docker 1.9.0, the ability to create a network specific to a set of containers was added. There are a couple of forms of Docker networking, but we'll be focusing on overlay networking, aka multi-host networking. Based on Virtual Extensible LAN (VXLAN) technology, an overlay network gives each container participating in the network its own ip address. The address is only routable to containers participating in the network. Since each container gets its own address, you won't get port collisions and you don't have to play the "port mapping game" to find an open port to bind your service to. Containers can participate in multiple networks, so you have the ability to segregate parts of your architecture and still route traffic to where it needs to go. Finally, networks created on the Swarm manager node are automatically made available to the Swarm workers. There is a legacy networking mode that required a dedicated consensus server, so if you see instructions requiring Consul or etcd to be installed, they are probably dealing with the legacy stuff.

  1. git reset --hard
  2. cd solutions/lab-23
  3. ./clean-slate-protocol.sh
  4. cat create-network.sh
  5. ./create-network.sh

Lab 24: Docker Swarm (Global Services)

Swarm supports two types of services. One type, the Global Service, is a container that is targeted at every node in the swarm. So, if you have 10 nodes in your swarm, then all 10 will run the global services you have deployed. The second type, the Replicated Service, is a container that is targeted at a specific deployment count. For example, if I have a stateless web application and I specify a replication of 3, then 3 of my 10 nodes will be running an instance of the web application.

Swarm Diagram

  1. git reset --hard
  2. cd solutions/lab-24
  3. ./clean-slate-protocol.sh
  4. cat create-global-service.sh
  5. ./create-global-service.sh

Notice how all the nodes, including the manager node, now have the hello-global service running on them? If I were to add another node to the swarm, it would also be told to run the service. When would you want to use a global service? I use them for "bookkeeping" containers such as DataDog or Consul.
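create-global-service.sh presumably boils down to a single service create. A dry-run sketch that just builds and prints the command (the hello-global name comes from the text above; the image and its command are guesses):

```shell
service=hello-global
image=alpine:latest                                   # guessed image
cmd="docker service create --mode global --name ${service} ${image} ping docker.com"
echo "docker-machine ssh bravo ${cmd}"
```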

Lab 25: Docker Swarm (Replicated Services)

Last time, we talked about global services. Today we'll look at replicated services. As the name suggests, the desire is to have multiple copies of a container running in the cluster. Containers housing stateless applications, such as a static web site, are candidates for replication. Containers that rely on local state will not work properly as a replicated service due to migration and load balancing issues. So what is a replicated service? If you need multiple containers to be running, probably for availability reasons, you can easily tell Docker that you would like N containers running at all times.

  1. git reset --hard
  2. cd solutions/lab-25
  3. ./clean-slate-protocol.sh
  4. cat create-replicated-service.sh
  5. ./create-replicated-service.sh

In the above example, we told Docker to deploy 2 copies of the alpine container into the cluster. We don't care what nodes are running the containers, just as long as there are two of them. If possible, Docker will schedule the containers on separate hosts. If a container fails, then it will be replaced. Very straightforward. Next time, we'll showcase constrained services which give us a bit more control over the placement of our containers.

Lab 26: Docker Swarm (Constrained Services)

Last time, we looked at replicated services. Today we'll look at a nuanced version of replicated services: constrained services. The primary difference between a replicated service and a constrained one is that we can put restrictions on where the containers can be run. The simplest constraint is to put containers on nodes that have been tagged with a particular label. You can also express other placement criteria using simple boolean expressions, but the currently available selection attributes are more limited than what you'll find in other schedulers, such as Kubernetes or Nomad.

  1. git reset --hard
  2. cd solutions/lab-26
  3. ./clean-slate-protocol.sh
  4. cat create-constrained-service.sh
  5. ./create-constrained-service.sh

In this example, we are asking for our 3 alpine containers to run only on worker nodes, and Docker will do its best to comply. If there are no nodes tagged as workers, Docker will wait until one becomes available and then start the containers. Next time, we'll learn how to scale down our running services.

Lab 27: Docker Swarm (Scale Down)

Last time, we looked at constrained deployments. Today we'll see how to scale our services down. In truth, there really isn't much to do because Docker takes care of everything for us. All we need to do is tell the swarm how many instances we currently need.

  1. git reset --hard
  2. cd solutions/lab-27
  3. ./clean-slate-protocol.sh
  4. cat scale-down-service.sh
  5. ./scale-down-service.sh

As you can see, we are telling Docker to scale our hello-constrained service down to a single instance. The interesting part of this example is watching the swarm turn off instances on nodes, leaving us with the single instance. Again, this is an example of declarative operations: we're telling Docker what we want, not how to do it.

Lab 28: Docker Swarm (Removal)

Last time, we looked at how to scale down our services. Today, we'll look at how to remove them all together.

  1. git reset --hard
  2. cd solutions/lab-28
  3. ./clean-slate-protocol.sh
  4. cat remove-service.sh
  5. ./remove-service.sh

As you can see, removing a service is very straightforward: it is as simple as removing a file in Linux.

Lab 29: Docker Swarm (Upgrade)

Last time, we saw how simple it was to remove a service from the swarm. Today, we'll look at something a little more interesting: rolling upgrades. The scenario is this: you have an existing collection of services deployed and you need to upgrade them to current bits. You would love for the service to remain available during the upgrade process and avoid making your customers unhappy. How can this be done? The answer is Docker's rolling upgrades. The idea is simple: once the process is started, one by one each instance of the service gets replaced with a newer version. During the process, you will have a mixture of the new and old bits, so your solution cannot be sensitive to that fact. Let's see how this looks in practice.

  1. git reset --hard
  2. cd solutions/lab-29
  3. ./clean-slate-protocol.sh
  4. cat upgrade-service.sh
  5. ./upgrade-service.sh

In this simple example, we install version 3.0.6 of Redis into the swarm and later decide to upgrade to Redis 3.0.7. The example is contrived and doesn't incorporate things you might do in a real setting, such as monitoring the state of the containers as they transition, or deciding what to do if there is a problem during the replacement process.
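The Redis upgrade described above maps onto docker service update. A dry-run sketch that only builds the commands (the service name and --update-delay flag are assumptions; the versions come from the text):

```shell
old=redis:3.0.6
new=redis:3.0.7
create="docker service create --replicas 3 --name redis --update-delay 10s ${old}"
update="docker service update --image ${new} redis"
printf '%s\n%s\n' "$create" "$update"
```

With an update delay set at creation time, the swarm replaces one task, waits, then moves to the next, which is what keeps the service available during the rollout.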

Lab 30: Docker Swarm (Maintenance)

Last time, we looked at rolling upgrades. Today, we'll learn how to temporarily take a node off-line for maintenance. At some point, you are probably going to have to take a node off-line and perform some maintenance on the box it is running on. It could be as simple as upgrading the version of Docker or as complex as swapping out a drive. During that time, you want to tell the swarm that the node is temporarily going away and that some other node needs to take its place in the interim. Thankfully, the process is pretty simple.

  1. git reset --hard
  2. cd solutions/lab-30
  3. ./clean-slate-protocol.sh
  4. cat maintenance-mode.sh
  5. ./maintenance-mode.sh

Things to note in the above session: first, the work shifts from delta to echo; second, once we bring delta back on-line the work remains with echo, since no rebalancing of the work occurs.
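Draining and reactivating a node is done with docker node update. A dry-run sketch matching the delta/echo behavior described above (commands are printed, not executed):

```shell
drain="docker node update --availability drain delta"      # work shifts off delta
activate="docker node update --availability active delta"  # delta rejoins; existing work stays on echo
printf '%s\n%s\n' "$drain" "$activate"
```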

Lab 31: Docker Swarm (Service Mesh)

Docker Engine swarm mode makes it easy to publish ports for services to make them available to resources outside the swarm. All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.

Service Mesh Diagram

  1. git reset --hard
  2. cd solutions/lab-31
  3. ./clean-slate-protocol.sh
  4. cat service-mesh.sh
  5. ./service-mesh.sh
  6. adjust the docker-machine security group to allow port 80 traffic to flow
  7. run watch 'curl --silent <IP address> | python3 -m json.tool', noticing the changing address and HOSTNAME
  8. look up the public address of some of the other nodes and hit those
  9. adjust the scale up or down and see how results are affected
  10. docker-machine ssh bravo docker service scale nginx=2
  11. docker-machine ssh bravo docker service ps nginx

Lab 32: Function as a Service (FaaS)

The business has decided that in order to stay competitive, our product needs to support developer extension points. They want an experience similar to AWS Lambda, where code written in a variety of programming languages can interact with our system in a safe and predictable manner. Luckily, we've already covered everything we'll need to produce a Docker-based proof of concept. In this lab, we'll create a FaaS implementation as a series of short shell scripts that interact with Docker, simulating the developer experience. The implementation must support the following:

  • JVM and Python based functions
  • functions that accept a string and that return a string
  • developer can specify the following runtime constraints
    • RAM
    • CPU
    • whether networking is needed or not
    • the name of the script to run
    • a file containing environment variables that the script can use for configuration

Create a script that launches a container using the appropriate image and runtime switches. You will have two scripts, one for each runtime. Your task is complete if the string passed to the script is printed in uppercase. There is a solution in solutions/lab-32 if you get stuck.

  1. git reset --hard
  2. cd labs/lab-32
  3. ./clean-slate-protocol.sh
  4. cat faas.env
  5. cat faas.groovy
  6. cat faas.py
  7. cat run-groovy-function.sh
  8. cat run-python-function.sh
  9. poke around Docker Hub to find the appropriate images
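The contract the functions must satisfy (take a string, return it uppercased) is easy to exercise locally before wiring it into a container; to_upper is a helper name invented here:

```shell
# The transform the lab's functions must perform, in plain shell.
to_upper() { printf '%s\n' "$1" | tr '[:lower:]' '[:upper:]'; }
to_upper "hello, faas"
```

The run-*.sh wrappers then just need to launch a container with the RAM, CPU, networking, and env-file switches from the constraint list and pass the string through to the script.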

Lab 33: AWS Docker Registry

In this lab, we will host our images on Amazon instead of using public repositories. Normally, this is done for security and operational reasons. We will create the registry in our AWS account, push an image to it and have one of our classmates pull from it. The container simply prints the current date and time.

  1. git reset --hard
  2. git pull
  3. cd labs/lab-33
  4. ./clean-slate-protocol.sh
  5. edit Dockerfile so that it creates an image that runs the Linux date command
  6. edit create-docker-image.sh as needed to create the proper image
  7. ./create-docker-image.sh to create the image
  8. run the proper Docker command to verify the image built correctly
  9. edit test-image.sh so that it tests your image
  10. ./test-image.sh to verify your image works correctly
  11. log into your AWS account, navigating to Compute->Elastic Container Service
  12. create your repository. You can call it anything you want but aws-study-group will be used in the solution
  13. follow the instructions from Amazon on how to authenticate to your registry
  14. edit tag-image.sh so that it tags the existing image with a tag suitable for your new repository
  15. ./tag-image.sh to tag the image
  16. run the proper Docker command to verify the image got tagged correctly
  17. edit push-image.sh
  18. ./push-image.sh to push it to the registry
  19. use the console to verify the image made it
  20. select a classmate
  21. using the AWS console, give their account the ability to pull down your image
  22. have them pull down your image and run it
  23. Using the AWS console, figure out how to auto-delete images that are older than 30 days
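Steps 14-18 follow the standard ECR naming scheme. A dry-run sketch that only builds the commands (the account id and region are placeholders for your own values):

```shell
ACCOUNT=123456789012                 # placeholder AWS account id
REGION=us-east-1
REPO=aws-study-group
REMOTE="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"
echo "docker tag ${REPO}:latest ${REMOTE}"
echo "docker push ${REMOTE}"
```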

Lab N: Consul, Service Discovery and Docker

Lab N: Personal Image Registry (4.8)

Lab N: Dockerfile Madness (advanced) (4.5.10.10)

Lab N: Docker Log Drivers (3.9)

Lab N: Guts

  1. /var/lib/docker/containers (3.15)
  2. /var/lib/docker (4.2)

Tips and Tricks

Troubleshooting

License and Credits

This project is licensed under the Apache License Version 2.0, January 2004.
