Overview

This document contains the steps that were covered in the AWS 101 labs. Useful if you missed a class or forgot a step.

Prerequisites

  • An AWS account
  • An SSH client, OpenSSH preferred

Lab 1: EC2 Virtual Servers

We'll deviate a bit from what was done in class and use only the Amazon Linux distribution.

Steps

  1. Log into the AWS console
  2. switch to the N. Virginia region -- we need everyone to be in the same region
  3. Search for ec2 in the search bar and bring up the EC2 Dashboard
  4. Click Launch Instance and select Amazon Linux AMI 2017.03.0 (HVM)
  5. Select t2.micro type and click Next: Configure Instance Details
  6. Make sure to select a public subnet if you are not using the default VPC
  7. Make sure that Auto-assign Public IP is set to Enable
  8. Expand Advanced Details to reveal the User data section
  9. Paste in the Bash script described below
  10. Select Next: Add Storage
  11. Click Add New Volume and keep the defaults
  12. Click Next: Add Tags and Add Tag
  13. One by one, add the tags described below
  14. Click Next: Configure Security Group
  15. Create a new security group using whatever Security group name and Description you want
  16. Ensure that the Source reads 0.0.0.0/0 or you might not be able to log in
  17. Click Review and Launch and then Launch
  18. Either create a new SSH key or select a previously generated one
  19. Click Launch Instance
  20. Click View Instances at the bottom of the screen
  21. Wait until Status Checks reads 2/2 checks passed
  22. Obtain the public ip address of your instance and ssh into it using your private key, eg ssh -i my-private-key ec2-user@34.209.215.10
  23. ls --all -l --human-readable, note that root has created a file during creation
  24. touch ~/dolphins-suck.txt to create a file in your home directory
  25. sudo yum update to update the OS
  26. df --print-type --human-readable to list the currently mounted file systems
  27. lsblk --list --fs to list available disks
  28. sudo mkfs.ext4 -L DATA /dev/xvdb to create an ext4 file system
  29. sudo mkdir /mnt/data to create the volume's mount point
  30. sudo mount /dev/xvdb /mnt/data to mount the volume
  31. lsblk --list --fs to see that the volume is now mounted
  32. sudo chown --recursive ec2-user:ec2-user /mnt/data to make your account the volume's owner
  33. ls --all -l --human-readable /mnt to show the new user and group values
  34. touch /mnt/data/dolphins-still-suck.txt to put a file onto the new volume
  35. ls --all -l --human-readable /mnt/data to prove the file exists

User Data Bash Script

#!/bin/bash

# runs as root on first boot; $(date) expands to a string containing
# spaces, so the quotes around the path matter
FILENAME=$(date)
touch "/home/ec2-user/${FILENAME}"

Instance Tags

  • Name - the name of your instance, eg NCC-1701
  • Purpose - the role the instance plays or the reason it exists, eg. lab playground
  • Project - the project name to attribute billing to, eg AWS Study Group
  • Creator - user or tool that created the instance, eg luke@disney.com
  • Environment - the context the instance is being used in, eg production
  • Freetext - any notes you might want to store, eg. Root password is super-secret
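
If you'd rather tag from the CLI, here is a minimal sketch; the instance id and tag values are placeholders for your own:

#!/bin/bash

aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=Name,Value=NCC-1701 \
           'Key=Purpose,Value=lab playground' \
           'Key=Project,Value=AWS Study Group' \
           Key=Creator,Value=luke@disney.com \
           Key=Environment,Value=production \
           'Key=Freetext,Value=Root password is super-secret'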

Lab 2: EC2 Virtual Servers (continued)

We'll be observing how volumes behave and how to clone instances.

Permanently Mounting A Volume

  1. ssh into the instance we created last time
  2. df --print-type --human-readable to list the currently mounted file systems
  3. sudo blkid /dev/xvdb to get the UUID of the data volume
  4. note the LABEL value
  5. sudo vi /etc/fstab
  6. add this line LABEL=DATA /mnt/data ext4 defaults 1 1
  7. save the file
  8. sudo mount /mnt/data to mount the 2nd volume
  9. sudo shutdown -r now to reboot the box
  10. ssh back into the instance
  11. df --print-type --human-readable to list the currently mounted file systems
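
A minimal sketch of steps 5 through 8 without opening vi, assuming the volume was labeled DATA in Lab 1:

#!/bin/bash

echo 'LABEL=DATA /mnt/data ext4 defaults 1 1' | sudo tee --append /etc/fstab
sudo mount --all   # test the entry now; a bad fstab line is easier to fix before rebooting
df --print-type --human-readable /mnt/data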

Snapshotting The Data Volume

  1. Click Volumes in the console and select the volume that is attached as /dev/sdb
  2. Click Actions and Create Snapshot
  3. Use Data as the name and whatever Description you want
  4. Create and switch to Snapshots view
  5. Examine the tags and notice only Name is provided
  6. Add in the missing tags by hand

Cloning An Instance Via Launch More Like This

  1. Log into the console and bring up the EC2 Dashboard
  2. Click Instances and select the instance created in the last lab
  3. Click Actions then Launch More Like This
  4. Click Launch and select the SSH key
  5. Wait for the instance to come up and then ssh into it
  6. In the console, edit the Name tag to read Clone
  7. sudo yum update -- notice how the updates need to be applied again
  8. df --print-type --human-readable -- is the second volume mounted?
  9. lsblk --list --fs -- is the second volume available?

Attaching A Volume To A Running Instance

  1. In the console, switch to Instances view and note the AZ id of the clone
  2. Stop the clone
  3. In the console, switch to Volumes view
  4. Select the clone instance
  5. Create Volume
  6. Search by your snapshot's description and select it
  7. Select the same AZ that your instance lives in
  8. Create
  9. Select the newly created volume (should be 100 GB in size)
  10. Fill in the tags
  11. Actions, Attach Volume
  12. Select your clone instance
  13. Leave Device to its default value
  14. Attach and wait for it to complete
  15. ssh into your instance
  16. Run the previously outlined steps to mount the volume
  17. The volume already has a file system on it so don't format the volume
  18. ls --all -l --human-readable /mnt/data. Is your file still there?

Lab 3: EC2 Virtual Servers (continued)

Cloning An Instance Via An AMI

  1. Ensure your instance has a file that distinguishes yours from others
  2. Select the Instances view
  3. Select your original instance
  4. Actions, Image, Create Image
  5. Fill in Image Name and Image Description as you like
  6. Create Image
  7. Wait for the AMI to complete
  8. Select the AMI
  9. Fill in the tags
  10. Launch
  11. Pick t2.nano
  12. Review and Launch
  13. Carefully review the settings and compare them to how the original instance looks
  14. Cancel and start again, this time going through the full wizard
  15. Launch the instance, wait for it to spin up and ssh into it
  16. See if OS patches need to be applied
  17. What files are in the home folder?
  18. What files are in the data folder?

Sharing An AMI

  1. In the console, select AMIs
  2. Select your AMI
  3. Actions, Modify Image Permissions, change to public

Create An Instance Via Somebody Else's AMI

  1. In the console, verify that you are in the N. Virginia region
  2. Instances
  3. Launch Instance
  4. Community AMIs
  5. Search for an AMI created by someone else in the class
  6. Spin up the instance and poke around. See if it looks the same as the AMI creator expects.
  7. Look in both the home directory and data volume

Using Ephemeral Storage

  1. Pull up Amazon EC2 AMI Locator
  2. Find a xenial hvm:instance-store AMI and click on its id
  3. Select m3.medium, which will cost you $0.07/hour
  4. ssh into the instance
  5. place a file in the home directory
  6. Reboot the instance
  7. see if your file is still there
  8. Try to Stop the instance. Will it let you?

Change the Instance Type

  1. select a stopped instance, a small type is preferred
  2. Actions, Instance Settings, Change Instance Type
  3. select something from a different family that is larger, eg. m3.medium costing $0.07/hour
  4. spin up the instance
  5. ssh into the instance
  6. poke around and verify the increased cores and ram, eg cat /proc/meminfo, cat /proc/cpuinfo, top
  7. stop the instance
  8. change the instance type back to what it was

Lab 4: EC2 Virtual Servers (continued)

Termination Protection

  1. Create a new EC2 instance of any size
  2. Ensure Enable termination protection is checked
  3. After the instance is up, use the console to terminate it. What happens?
  4. Stop the instance
  5. Use the console to terminate the instance. What happens?

Instance Metadata

  1. ssh into an instance
  2. curl http://169.254.169.254/latest/meta-data/ -- notice the trailing slash
  3. try different endpoints, eg curl http://169.254.169.254/latest/meta-data/instance-id
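
A quick way to walk several endpoints at once; the paths below are standard metadata endpoints:

#!/bin/bash

for ENDPOINT in instance-id instance-type placement/availability-zone public-ipv4
do
    echo -n "${ENDPOINT}: "
    curl --silent http://169.254.169.254/latest/meta-data/${ENDPOINT}
    echo
done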

Expand Volume

  1. Spin up an existing EC2 instance, one that has the extra data volume attached
  2. ssh into the instance
  3. df -Th to note the size of the volumes
  4. From the console, find the data volume in the Instances view and click through
  5. From the Volumes view, Actions, Modify Volume
  6. Double the size of the volume and click Modify
  7. Once complete, df -Th to see that the volume is still at its original size
  8. sudo file -s /dev/xvdb to verify the volume's file system
  9. lsblk to verify that there is no partition that needs to be extended
  10. Note the difference in size reported by df and lsblk
  11. sudo resize2fs /dev/xvdb to expand the volume to its new size
  12. df -Th to verify that the file system has expanded to match the new volume size
  13. lsblk and df -h should now agree
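
The console's Modify Volume dialog maps to a CLI call; here is a hedged sketch with a placeholder volume id and size:

#!/bin/bash

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16
# watch the modification progress
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0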

Lab 5: Elastic Load Balancers

Create Docker AMI

  1. Spin up an Amazon Linux EC2 instance, t2.nano or t2.micro will do
  2. sudo yum update -- patch any security vulnerabilities
  3. sudo yum install docker -- install Docker runtime
  4. groups ec2-user
  5. sudo usermod --append --groups docker ec2-user
  6. groups ec2-user
  7. sudo service docker restart
  8. log out and back into the instance or the next step will show an error
  9. docker info
  10. install the Docker container using the script below
  11. docker ps
  12. curl localhost:8080/operations/health | python -m json.tool
  13. curl localhost:8080/operations/info | python -m json.tool
  14. curl localhost:8080/ | python -m json.tool
  15. create an AMI out of it. We'll use it to create multiple instances.

Docker Container Installation Script

#!/bin/bash

APPLICATION_NAME=TLO

# gather identifying details from the instance metadata service
HOST_NAME=$(curl http://169.254.169.254/latest/meta-data/hostname)
AVAILABILITY_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-type)

# bake the details into the container's environment so the echo service
# can report where it is running
CMD="docker run --detach \
                --name aws-echo \
                --network host \
                --restart always \
                --env INFO_APPLICATION_NAME=${APPLICATION_NAME} \
                --env INFO_APPLICATION_INSTANCE=${INSTANCE_TYPE}:${INSTANCE_ID} \
                --env INFO_APPLICATION_LOCATION=${AVAILABILITY_ZONE}:${HOST_NAME} \
                kurron/spring-cloud-aws-echo:latest"
# echo the command before running it so it shows up in any captured logs
echo ${CMD}
${CMD}

Spin Up Classic ELB Instances

  1. Use the AMI to launch at least 3 small instances.
  2. Use a wide open security group to avoid firewall issues
  3. Make sure the instances get assigned a public ip address
  4. After they spin up, grab the public ip addresses and test them from your Windows box or another EC2 instance
  5. curl ip-address:8080/operations/health | python -m json.tool
  6. curl ip-address:8080/operations/info | python -m json.tool
  7. curl ip-address:8080/ | python -m json.tool
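
To test every instance in one go, a small sketch; replace the addresses with the public ips of your own instances:

#!/bin/bash

for IP in 34.209.215.10 54.202.28.248 54.200.196.150
do
    curl --silent ${IP}:8080/operations/health | python -m json.tool
done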

Classic ELB

  1. Load Balancers, Create Load Balancer
  2. Select Classic Load Balancer, we create an Application Load Balancer later
  3. Name the balancer so you know it is the classic balancer
  4. Load Balancer Port should be 80 and Instance Port should be 8080
  5. Assign Security Groups, select a wide open group
  6. Configure Security Settings, Configure Health Check
  7. Ping Port should be 8080
  8. Ping Path should be /operations/health
  9. Interval should be 10 seconds
  10. Healthy threshold should be 2
  11. Add EC2 Instances
  12. Select your newly spun up instances, Add Tags
  13. Fill in the tags, Review and Create
  14. Create
  15. Select the Instances tab and wait for the instance status to be InService
  16. Note the balancer's DNS name
  17. curl lb-dns-name:80/ | python -m json.tool

Lab 6: Elastic Load Balancers

Classic ELB Failover

  1. Create a script to continually hit the ELB (see below)
  2. Run it and note the served-by value. It should change with each request.
  3. Also note the calculated-return-path and how it reflects the "outside" view
  4. Note how we hit the balancer at port 80 but the service lives at 8080
  5. In the console, pull up the Instances to see which instances are showing as healthy
  6. Select one instance and ssh into it
  7. docker ps to show the running containers
  8. docker stop aws-echo to turn off the container
  9. Watch the watcher script. Has service been disrupted? How has the served-by changed?
  10. Examine the balancer's Monitoring and Instances tabs
  11. Repeat the process for another node. How is service behaving now?
  12. Re-enable each service via docker start aws-echo and watch the script and console
  13. How would the health check settings affect how quickly instances get removed and added?
  14. How do availability zones affect the load balancer and the EC2 instances?
  15. If we wanted the load balancer to front more than one type of application, what would we do?
  16. What non-HTTP applications might you front with a load balancer?

Classic Load Balancer Watch Script

#!/bin/bash

# Your DNS name is going to be different
ELB=${1:-classic-load-balancer-1166062004.us-east-1.elb.amazonaws.com}
DELAY=${2:-2}

CMD="curl --silent http://${ELB}:80/"

for (( ; ; ))
do
#  echo ${CMD}
   ${CMD} | python -m json.tool
   sleep ${DELAY}
done

Lab 7: Elastic Load Balancers (continued)

Create A Multi-Application AMI

  1. Follow the instructions in the previous lab and create a Docker AMI
  2. Use the script below to install the correct containers. Don't use the script in the original instructions.
  3. For this example, we saved the script to install-application.sh
  4. ./install-application.sh TLO / 8080 - to install the TLO application at port 8080
  5. ./install-application.sh Mold-E / 9090 - to install the Mold-E application at port 9090
  6. curl --location --silent localhost:8080 | python -m json.tool
  7. curl --location --silent localhost:8080/operations/health | python -m json.tool
  8. curl --location --silent localhost:8080/operations/info | python -m json.tool
  9. curl --location --silent localhost:9090 | python -m json.tool
  10. curl --location --silent localhost:9090/operations/health | python -m json.tool
  11. curl --location --silent localhost:9090/operations/info | python -m json.tool
  12. Create the AMI

Docker Container Installation Script

#!/bin/bash

APPLICATION_NAME=${1:-FOO}
SERVER_PATH=${2:-/foo}
SERVER_PORT=${3:-1234}

HOST_NAME=$(curl http://169.254.169.254/latest/meta-data/hostname)
AVAILABILITY_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-type)

CMD="docker run --detach \
                --name ${APPLICATION_NAME} \
                --network host \
                --restart always \
                --env INFO_APPLICATION_NAME=${APPLICATION_NAME} \
                --env INFO_APPLICATION_INSTANCE=${INSTANCE_TYPE}:${INSTANCE_ID} \
                --env INFO_APPLICATION_LOCATION=${AVAILABILITY_ZONE}:${HOST_NAME} \
                --env SERVER_CONTEXT-PATH=${SERVER_PATH} \
                --env SERVER.PORT=${SERVER_PORT} \
                kurron/spring-cloud-aws-echo:latest"
echo ${CMD}
${CMD}

Spin Up Multi-Application Instances

  1. Use the newly created AMI to create 2 new instances
  2. Use cURL and ensure that both ports can be hit on each instance
  3. curl --location --silent 54.202.28.248:8080/operations/info | python -m json.tool
  4. curl --location --silent 54.202.28.248:9090/operations/info | python -m json.tool
  5. curl --location --silent 54.202.28.248:8080/ | python -m json.tool
  6. curl --location --silent 54.202.28.248:9090/ | python -m json.tool

Support Multiple Applications (Classic Load Balancer)

  1. Create a classic ELB
  2. Map port 1024 on the ELB to port 8080 in the instances
  3. Map port 2048 on the ELB to port 9090 in the instances
  4. Use /operations/health for the health check
  5. Notice how only one of the two applications can provide the health check
  6. Use cURL and hit each port on the ELB (convenience script provided below)
  7. Notice how each port maps to a different application
  8. Notice how we had to install both applications on each instance
  9. What are some of the drawbacks to this technique?

Dual Application ELB Watch Script

#!/bin/bash

ELB=${1:-dual-applications-classic-876351830.us-west-2.elb.amazonaws.com}
DELAY=${2:-2}

TLO="curl --location --silent ${ELB}:1024/operations/info"
MOLDE="curl --location --silent ${ELB}:2048/operations/info"

for (( ; ; ))
do
   ${TLO} | python -m json.tool
   echo
   ${MOLDE} | python -m json.tool
   sleep ${DELAY}
done

Lab 8: Elastic Load Balancers (continued)

Spin Up ELB Instances (Application Load Balancer)

  1. Create another Docker AMI, this time do not install the docker containers
  2. Create 4 machines from the AMI, supplying user data based on the Docker container script from the previous exercise (a sketch for the TLO pair follows this list)
  3. ALB wants instances in at least two AZs so ensure you have the 4 instances split between 2 AZs
  4. Have 2 instances be the TLO application. APPLICATION_NAME=TLO, SERVER_PATH=/tlo, SERVER_PORT=8080
  5. Have 2 instances be the Mold-E application. APPLICATION_NAME=Mold-E, SERVER_PATH=/mold-e, SERVER_PORT=9090
  6. Hit the /tlo/operations/info and /mold-e/operations/info endpoints
  7. It is very important to test the endpoints with so many moving parts
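
A minimal sketch of that user data for the TLO pair -- essentially the Lab 7 installation script with the TLO values baked in. Swap in Mold-E, /mold-e, and 9090 for the other pair:

#!/bin/bash

APPLICATION_NAME=TLO
SERVER_PATH=/tlo
SERVER_PORT=8080

# gather identifying details from the instance metadata service
HOST_NAME=$(curl http://169.254.169.254/latest/meta-data/hostname)
AVAILABILITY_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-type)

docker run --detach \
           --name ${APPLICATION_NAME} \
           --network host \
           --restart always \
           --env INFO_APPLICATION_NAME=${APPLICATION_NAME} \
           --env INFO_APPLICATION_INSTANCE=${INSTANCE_TYPE}:${INSTANCE_ID} \
           --env INFO_APPLICATION_LOCATION=${AVAILABILITY_ZONE}:${HOST_NAME} \
           --env SERVER_CONTEXT-PATH=${SERVER_PATH} \
           --env SERVER.PORT=${SERVER_PORT} \
           kurron/spring-cloud-aws-echo:latest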

ELB (Application Load Balancer)

  1. Create Load Balancer, Application Load Balancer
  2. Name can be anything you want
  3. Scheme should be internet-facing
  4. HTTP Listeners port 80
  5. Select all subnets in your VPC. The UI is odd in this context.
  6. Set your tags
  7. Configure Security Settings, Configure Security Groups
  8. Select or create a wide open group -- all ports and addresses
  9. Configure Routing
  10. New Target Group with Name of TLO, Protocol of HTTP, Port of 8080
  11. Health Checks should be set to HTTP and /tlo/operations/health
  12. Register Targets
  13. Select only the TLO instances -- we need the others for another group
  14. Review and Create
  15. Wait for the balancer to be provisioned
  16. Target Groups, Create Target Group
  17. Create a Mold-E group that points to port 9090 and /mold-e/operations/health
  18. Select your load balancer
  19. View/edit rules, Add Rules, Insert Rule, Path, forward /tlo/* to TLO, Save
  20. Repeat but forward /mold-e/* to Mold-E
  21. Test the ELB via curl --location --silent ALB-Experiment-763587424.us-west-2.elb.amazonaws.com/tlo/operations/info
  22. Test the ELB via curl --location --silent ALB-Experiment-763587424.us-west-2.elb.amazonaws.com/mold-e/operations/info
  23. run the watcher script below
  24. turn off containers and watch how the ELB responds

Dual Application ALB Watch Script

#!/bin/bash

ELB=${1:-dual-applications-classic-876351830.us-west-2.elb.amazonaws.com}
DELAY=${2:-2}

TLO="curl --location --silent ${ELB}:80/tlo/operations/info"
MOLDE="curl --location --silent ${ELB}:80/mold-e/operations/info"

for (( ; ; ))
do
   ${TLO} | python -m json.tool
   echo
   ${MOLDE} | python -m json.tool
   sleep ${DELAY}
done

Lab 9: EC2 Container Service: Creating The Cluster

In this lab we'll create an empty cluster ready to accept work.

Create Empty Cluster

  1. EC2 Container Service, Create Cluster
  2. Use transparent as the cluster name
  3. Create an empty cluster

Create ECS Instances

  1. Look up the ECS-optimized AMI to use for your region
  2. Create 2 instances from the AMI -- ideally in different availability zones
  3. t2.nano should do just fine
  4. Important: IAM Role and select ecsInstanceRole from the list
  5. Advanced Details and enter the script below
  6. Storage and leave defaults
  7. Add your tags
  8. Use a wide-open security group
  9. Launch the instances
  10. Monitor the ECS Instances tab of the Amazon ECS view
  11. In a few minutes, your instances should be registered with the cluster

ECS Instance User Data Script

#!/bin/bash
echo ECS_CLUSTER=transparent >> /etc/ecs/ecs.config

Create ECS Task Definition

  1. Amazon ECS, Task Definitions, Create new Task Definition
  2. Task Definition Name: tlo
  3. Task Role: None
  4. Network Mode: Host, ignoring the warning
  5. Add container
  6. Container name: TLO-hard-port
  7. Image: kurron/spring-cloud-aws-echo:latest
  8. Memory Limits (MB): Hard limit, 256
  9. Env Variables: INFO_application_name, TLO
  10. Env Variables: SERVER_CONTEXT-PATH, /tlo
  11. Add, Create
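
If you prefer the CLI, a hedged sketch of the same task definition; the JSON field names mirror the console entries above:

#!/bin/bash

cat > tlo.json <<'EOF'
{
    "family": "tlo",
    "networkMode": "host",
    "containerDefinitions": [
        {
            "name": "TLO-hard-port",
            "image": "kurron/spring-cloud-aws-echo:latest",
            "memory": 256,
            "environment": [
                { "name": "INFO_application_name", "value": "TLO" },
                { "name": "SERVER_CONTEXT-PATH", "value": "/tlo" }
            ]
        }
    ]
}
EOF
aws ecs register-task-definition --cli-input-json file://tlo.json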

Create ECS Service

  1. Amazon ECS, Clusters, transparent
  2. Services, Create
  3. Task Definition: tlo
  4. Cluster: transparent
  5. Service Name: tlo-hard-port
  6. Number of tasks: 2
  7. Create Service, View Service
  8. Monitor the Tasks tab until both tasks are running
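
The equivalent CLI call, as a hedged sketch:

#!/bin/bash

aws ecs create-service \
    --cluster transparent \
    --service-name tlo-hard-port \
    --task-definition tlo \
    --desired-count 2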

Test The Services, The Ugly Way

  1. Obtain the public addresses of both EC2 instances
  2. Hit each instance and make sure it responds
  3. curl --location --silent 54.200.196.150:8080/tlo/operations/info | python -m json.tool
  4. Poke around the different views and see what information is available

Take One Instance Off-line

  1. select one of the instances in the ECS Instances view
  2. Actions, Drain Instances
  3. Did the 2nd container get moved to the remaining instance?
  4. How can we find out what happened?
  5. How can we fix things?

Lab 10: EC2 Container Service: Using an ELB

Create Load Balancer

  1. Application Load Balancer
  2. Defaults are appropriate but select all availability zones and subnets
  3. wide open security group
  4. Create a new target group but it is a dummy one that won't be used
  5. Do not register any targets
  6. Review and Create
  7. Wait for it to be provisioned

Update Task Definition

  1. Select the tlo task definition
  2. Create new revision
  3. Change Network Mode from Host to Bridge
  4. Click TLO-Hard-Port
  5. Change the name to TLO-Dynamic-Port
  6. Add port mapping
  7. Leave Host Port blank, Container Port 8080
  8. Update
  9. Create

Create Load Balanced ECS Service

  1. Amazon ECS, Clusters, transparent
  2. Services, Create
  3. Task Definition: tlo:2
  4. Cluster: transparent
  5. Service Name: tlo-dynamic-port
  6. Number of tasks: 2
  7. Configure ELB
  8. The defaults should be sufficient
  9. Add to ELB
  10. Listener port should be 80:HTTP
  11. Change Path pattern to /tlo*
  12. Evaluation order to 1
  13. Change Health check path to /tlo/operations/health
  14. Save
  15. Create Service, View Service
  16. Monitor the Tasks tab until both tasks are running
  17. Check the Events tab
  18. cURL the ELB endpoint: curl --silent ecs-balancer-1527593673.us-west-2.elb.amazonaws.com/tlo/operations/info | python -m json.tool
  19. Traffic should be balanced between the instances
  20. Change number of desired tasks and see what happens

Lab 11: EC2 and Auto Scaling Groups

Create Launch Configuration

  1. Create launch configuration
  2. Pick Amazon Linux AMI and run it on a t2.nano
  3. Name it ec2-asg and click Enable CloudWatch detailed monitoring
  4. Make sure it gets a public address because we need to SSH into the boxes
  5. Attach a wide-open security group
  6. Create launch configuration
  7. Close

Create Auto Scaling Group

  1. Create Auto Scaling group
  2. Select ec2-asg from the list, Next Step
  3. Name it ec2-asg and start with 1 instance
  4. Select your VPC and all public subnets within it
  5. Advanced and select Enable CloudWatch detailed monitoring
  6. Configure scaling policies, Use scaling policies to adjust the capacity of this group
  7. Scale between 1 and 6 instances
  8. Average CPU Utilization and Target Value of 50
  9. Give the instances 10 seconds to warm up
  10. Set Health Check Grace to 60 seconds
  11. Set Default Cooldown to 60 seconds
  12. Configure Notifications, Configure Tags, Create Auto Scaling Group
  13. Verify that you have one instance spinning up

Simulate High CPU Load

  1. SSH into the instance
  2. sudo yum update, sudo yum install stress
  3. stress --verbose --cpu 1
  4. In another terminal, SSH into the instance and run top to ensure 100% of the CPU is being used
  5. Monitor the Activity History and Instances tab in your ASG view
  6. What happens? How long does it take?
  7. Kill the stress program and monitor the ASG
  8. What happens? How long does it take?
  9. If we chose to Disable scale-in what would happen?
  10. Change Health Check Grace to the default 300 seconds and re-run the experiment.
  11. Try configuring Step Scaling and re-run the experiment
  12. Clean up your ASG or you will always be running an instance!
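
If stress isn't available in your repositories, a hedged fallback that pegs one core just as well:

#!/bin/bash

# burn one core until the process is killed
yes > /dev/null &
echo "burning CPU as pid $!; kill it to let the ASG scale back in"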

Lab 12: EC2 Container Service: Using an ELB and Auto Scaling Group

Based on how long it took us to simulate resource exhaustion, this lab will just be about noting where in the console the auto scaling option for containers exists. You can trigger scaling events on your own time.

  1. select an existing service definition and Update it
  2. Under Optional configuration, click Configure Service Auto Scaling
  3. Configure Service Auto Scaling to adjust your service's desired count
  4. Add scaling policy, Create new Alarm
  5. Notice how not only can we use the pre-baked RAM and CPU triggers but we can also create our own.
  6. NOTE: we can combine EC2 instance scaling with ECS container scaling

Lab 13: API Gateway

We'll need to understand the basics of API Gateway before we can move on to our final compute capability, Lambdas. Watching the API Gateway video prior to this lab is highly recommended.

Create the API

  1. A working TLO echo service is required. Past labs provide at least 3 possible ways of doing this.
  2. Select API Gateway in the console
  3. Create API, New API
  4. Name it aws-study-group, put in a description followed by Create API
  5. Select the / resource and click Actions and select Create Method
  6. Select ANY from the dropdown and click the check mark.
  7. Integration type of HTTP, check Use HTTP Proxy integration, enter in your TLO endpoint, then Save
  8. eg. http://ecs-balancer-1527593673.us-west-2.elb.amazonaws.com/tlo/

Have Method Request Forward Host Header

  1. Click Method Request
  2. Expand HTTP Request Headers then Add header
  3. Name should be host and click the check mark.
  4. Check the Required check box.

Have Integration Request Translate Host Header

  1. Click Integration Request and expand HTTP Headers
  2. Add header, Name should be x-forwarded-host, Mapped from should be method.request.header.host, click the check mark

Test Gateway Endpoint Internally

  1. Click the Test link
  2. Test a GET request

Publish The API

  1. Select the API in the tree
  2. Actions, Deploy API
  3. Create a new stage called production and Deploy
  4. cURL the API endpoint, eg https://a8eu4cq3gl.execute-api.us-west-2.amazonaws.com/production/
  5. Notice the calculated-return-path and x-forwarded-host properties
  6. cURL the /operations/info endpoint. What happens?

Proxy After Slash

  1. Resources, Actions, Create Resource
  2. Check Configure as proxy resource, Create Resource
  3. HTTP Proxy
  4. Endpoint URL should have your endpoint plus the {proxy}, eg http://ecs-balancer-1527593673.us-west-2.elb.amazonaws.com/tlo/{proxy}
  5. Save
  6. Publish the API again
  7. Try cURLing the operations endpoint again, eg curl https://w4f4fmcaa6.execute-api.us-west-2.amazonaws.com/production/operations/info/

Lab 14: API Gateway Continued

Generate API Key

  1. Select API Keys
  2. Actions, Create API Key
  3. Name it aws-study-group
  4. Save

Use API Key

  1. Select your API
  2. Select ANY, Method Request
  3. Change API Key Required to true
  4. Republish the API
  5. cURL the slash endpoint. What happens?
  6. Add the x-api-key header using your API key to the request
  7. Add --header "x-api-key:my-api-key" to the cURL command, substituting your own key
  8. cURL the slash endpoint. What happens?
  9. NOTE: you should be denied until a Usage Plan is attached

Configure Usage Plan

  1. Usage Plans, Create
  2. Name it aws-study-group-usage-plan
  3. Set the Rate to 1 and the Burst to 2
  4. Limit to 10 requests per Day
  5. Next
  6. Add API Stage and associate the production stage of your API
  7. Next, Add API Key to Usage Plan
  8. Associate the API key to the plan, Done
  9. Try cURLing the endpoint again. What happens?

Exceed Usage Plan

  1. Hit the endpoint several more times. What happens?
  2. Usage Plans, aws-study-group-usage-plan, API Keys, Extension
  3. Give the key 5 more requests for the day
  4. cURL again. What happens?

Configure Usage Plan Part Deux

  1. Add the other resource to the Usage plan
  2. Hit /operations/info and verify that limits are working

Lab 15: AWS Lambda -- What's For Dinner, Alexa?

We will be following the steps in Amazon Alexa Skill Recipe with Python 3.6. IMPORTANT: set up a developer account prior to attempting the lab.

Intents

{
    "intents":[
        {
            "intent":"DinnerBotIntent"
        }
    ]
}

Sample Utterances

DinnerBotIntent What should I have for dinner
DinnerBotIntent Do you have a dinner idea
DinnerBotIntent Whats for dinner

Lambda Code

import random

# the pool of meals the skill can suggest
dinnerOptions = [
    "Chicken",
    "Beef",
    "Pork",
    "Fish",
    "Vegetarian"
]

def lambda_handler( event, context ):
    # pick a meal at random and wrap it in the Alexa response envelope
    dinner = random.choice( dinnerOptions )
    response = {
        'version': '1.0',
        'response': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': dinner
            }
        }
    }
    return response

Lab 16: AWS Lambda with API Gateway

We'll be following the steps in Using AWS Lambda with Amazon API Gateway (On-Demand Over HTTPS)

Lab 17: AWS Lambda and Scheduled Events

We'll be following the steps in Using AWS Lambda with Scheduled Events

Lab 18: AWS Lambda and S3

We'll be following the steps in Using AWS Lambda with Amazon S3

Lab 19: AWS CloudFormation

It has become apparent to me that people enjoy a "guided tour" as opposed to step-by-step instructions. So, for this lab, we'll outline a simple objective and let people figure out how to do it based on what was learned in the video.

  • Write a CloudFormation template that creates a new ECS cluster
  • Feel free to use my personal template as a reference
  • You will need CLI access so I suggest creating a dedicated EC2 instance for this and future labs
  • You will need to access the CloudFormation reference documentation to do this
  • For bonus points, launch your stack via the console

Lab 20: Automation and AWS CLI

  1. spin up an Amazon Linux instance making sure to assign it an admin role
  2. ensure the CLI is installed via aws --version
  3. install git via sudo yum update followed by sudo yum install git
  4. git --version
  5. clone the study group materials, git clone https://github.com/kurron/aws-study-group-labs.git
  6. cd aws-study-group-labs/labs/lab-20
  7. ./sanity-check-cli.sh
  8. edit spin-up-instance-via-aws-cli.sh so that it succeeds
  9. run the script a second time. What happens?
  10. run the script a third time. What happens?
  11. create a script that terminates the instance you just created
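
One possible answer to the last step, as a hedged sketch; pass the instance id your creation script reported:

#!/bin/bash

INSTANCE_ID=${1:?usage: $0 instance-id}
aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}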

Lab 21: Automation and CloudFormation

  1. spin up the instance from Lab 20
  2. cd aws-study-group-labs
  3. reset the area via git reset --hard
  4. git status to ensure old changes no longer exist
  5. git pull --rebase to update the area with the new lab
  6. Open the CloudFormation view in the console
  7. cd labs/lab-21/
  8. edit cloudformation.yml to utilize the supplied parameters
  9. ./validate-stack.sh will validate the descriptor
  10. edit and run ./create-stack.sh to spin up the stack
  11. look at the stack in the console, especially the outputs and events
  12. ./create-stack.sh a second time. What happens?
  13. edit and run ./destroy-stack.sh to clean things up
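
The lab's helper scripts likely wrap calls along these lines; a hedged sketch with a placeholder stack name:

#!/bin/bash

aws cloudformation validate-template --template-body file://cloudformation.yml
aws cloudformation create-stack --stack-name lab-21 --template-body file://cloudformation.yml
aws cloudformation describe-stack-events --stack-name lab-21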

Lab 22: Automation and Ansible

  1. spin up the instance from Lab 20
  2. cd aws-study-group-labs
  3. reset the area via git reset --hard
  4. git status to ensure old changes no longer exist
  5. git pull --rebase to update the area with the new lab
  6. cd labs/lab-22/
  7. ./install-ansible.sh to install Ansible
  8. edit playbook.yml, adding the missing pieces
  9. ./playbook.yml to run your playbook
  10. run the playbook a second time. What happens?
  11. Compare the Ansible descriptor to the CloudFormation one. Which do you prefer?

Lab 23: Automation and Terraform

  1. spin up the instance from Lab 20
  2. cd aws-study-group-labs
  3. reset the area via git reset --hard
  4. git status to ensure old changes no longer exist
  5. git pull --rebase to update the area with the new lab
  6. cd labs/lab-23/
  7. ./install-terraform.sh to install Terraform
  8. edit ec2-instance.tf so it can spin up an EC2 instance.
  9. terraform init to initialize Terraform -- don't supply a key pair
  10. terraform plan to validate the file and see what changes are proposed
  11. terraform apply to execute the proposed changes
  12. terraform show to see the results of the execution
  13. visit the console and verify that the instance fully comes up
  14. edit ec2-instance.tf and add the key pair attribute
  15. terraform plan to see what changes are proposed
  16. What is interesting in the output?
  17. terraform apply to execute the proposed changes
  18. visit the console and verify that the instance fully comes up
  19. How many instances do you see?
  20. terraform apply again to see what happens
  21. terraform destroy to clean up
  22. Compare the Terraform descriptor to the Ansible and CloudFormation ones. Which do you prefer?

Lab 24: Automation and Packer

  1. spin up the instance from Lab 20
  2. cd aws-study-group-labs
  3. reset the area via git reset --hard
  4. git status to ensure old changes no longer exist
  5. git pull --rebase to update the area with the new lab
  6. cd labs/lab-24/
  7. ./install-packer.sh
  8. pull up links.txt and browse the Packer documentation
  9. edit ami.json filling in the missing information
  10. ./run-packer.sh to execute your changes
  11. in the console, create an instance from your newly minted AMI
  12. ssh into the instance and see if the file you added is there
  13. have Packer use Ansible to install a package, like tree

Lab 25: Simple Notification Service

  1. Spin up an EC2 instance running with an IAM role with admin rights
  2. Install the AWS cli
  3. Watch 8.2 Creating Amazon SNS Topics and Subscriptions with AWS CLI
  4. We will follow the video's instructions to create a topic and publish to it. Since we are using a single account, our commands will be slightly different.
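
A hedged sketch of the flow, using a placeholder topic name and email address:

#!/bin/bash

# create the topic and capture its ARN
TOPIC_ARN=$(aws sns create-topic --name aws-study-group --output text --query TopicArn)

# subscribe an email address; the subscription must be confirmed from the inbox
aws sns subscribe --topic-arn ${TOPIC_ARN} --protocol email --notification-endpoint luke@disney.com

# once confirmed, publish a test message
aws sns publish --topic-arn ${TOPIC_ARN} --message "Dolphins still suck."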

Proposed Remaining Module Ideas

  • Automation via AWS CLI and Bash
  • Automation via Terraform
  • AutoScaling and CloudWatch
  • CloudFormation
  • Storage in AWS
  • Relational Database Service
  • Simple Storage Service
  • CloudFront
  • ElastiCache
  • Virtual Private Cloud
  • Simple Notification Service
  • Simple Email Service
  • Simple Queuing Service
  • Identity And Access Management (IAM)
  • Route 53
  • Building A 3 Tier Scalable Web Application In The Cloud
  • An Ansible series:
    1. Introduction To Ansible
    2. Setup, Installation And Configuration
    3. Ansible, Ansible-Docs, Ansible-Playbook
    4. Includes And Roles
    5. Playbooks Continued
    6. Special Topics

Tips and Tricks

Troubleshooting

License and Credits

This project is licensed under the Apache License Version 2.0, January 2004.
