This document contains the steps that were covered in the AWS 101 labs. Useful if you missed a class or forgot a step.
- An AWS account
- An SSH client, OpenSSH preferred
We'll deviate a bit from what was done in class and use only the Amazon Linux distribution.
- Log into the AWS console
- Switch to the `N. Virginia` region -- we need everyone to be in the same region
- Search for `ec2` in the search bar and bring up the EC2 Dashboard
- Click `Launch Instance` and select `Amazon Linux AMI 2017.03.0 (HVM)`
- Select the `t2.micro` type and click `Next: Configure Instance Details`
- Make sure to select a public subnet if you are not using the default VPC
- Make sure that `Auto-assign Public IP` is set to `Enable`
- Expand `Advanced Details` to reveal the `User data` section
- Paste in the Bash script described below
- Select `Next: Add Storage`
- Click `Add New Volume` and keep the defaults
- Click `Next: Add Tags` and `Add Tag`
- One by one, add the tags described below
- Click `Next: Configure Security Group`
- Create a new security group using whatever `Security group name` and `Description` you want
- Ensure that the `Source` reads `0.0.0.0/0` or you might not be able to log in
- Click `Review and Launch` and then `Launch`
- Either create a new SSH key or select a previously generated one
- Click `Launch Instance`
- Click `View Instances` at the bottom of the screen
- Wait until `Status Checks` reads `2/2 checks passed`
- Obtain the public IP address of your instance and SSH into it using your private key, e.g. `ssh -i my-private-key ec2-user@34.209.215.10`
- `ls --all -l --human-readable` -- note that `root` has created a file during creation
- `touch ~/dolphins-suck.txt` to create a file in your home directory
- `sudo yum update` to update the OS
- `df --print-type --human-readable` to list the currently mounted file systems
- `lsblk --list --fs` to list available disks
- `sudo mkfs.ext4 -L DATA /dev/xvdb` to create an ext4 file system
- `sudo mkdir /mnt/data` to create the volume's mount point
- `sudo mount /dev/xvdb /mnt/data` to mount the volume
- `lsblk --list --fs` to see that the volume is now mounted
- `sudo chown --recursive ec2-user:ec2-user /mnt/data` to make your account the volume's owner
- `ls --all -l --human-readable /mnt` to show the new user and group values
- `touch /mnt/data/dolphins-still-suck.txt` to put a file onto the new volume
- `ls --all -l --human-readable /mnt/data` to prove the file exists
```bash
#!/bin/bash
FILENAME=$(date)
touch "/home/ec2-user/${FILENAME}"
```
- `Name` - the name of your instance, e.g. `NCC-1701`
- `Purpose` - the role or reason the instance will play in a project, e.g. `lab playground`
- `Project` - the project name to attribute billing to, e.g. `AWS Study Group`
- `Creator` - user or tool that created the instance, e.g. `luke@disney.com`
- `Environment` - the context the instance is being used in, e.g. `production`
- `Freetext` - any notes you might want to store, e.g. `Root password is super-secret`
We'll be observing how volumes behave and how to clone instances.
- SSH into the instance we created last time
- `df --print-type --human-readable` to list the currently mounted file systems
- `sudo blkid /dev/xvdb` to get the UUID of the data volume
- Note the `LABEL` value
- `sudo vi /etc/fstab`
- Add this line: `LABEL=DATA /mnt/data ext4 defaults 1 1`
- Save the file
- `sudo mount /mnt/data` to mount the 2nd volume
- `sudo shutdown -r now` to reboot the box
- SSH back into the instance
- `df --print-type --human-readable` to list the currently mounted file systems
- Click `Volumes` in the console and select the volume that is attached as `/dev/sdb`
- Click `Actions` and `Create Snapshot`
- Use `Data` as the name and whatever `Description` you want
- `Create` and switch to the `Snapshots` view
- Examine the tags and notice only `Name` is provided
- Add in the missing tags by hand
- Log into the console and bring up the `EC2 Dashboard`
- Click `Instances` and select the instance created in the last lab
- Click `Actions` then `Launch More Like This`
- Click `Launch` and select the SSH key
- Wait for the instance to come up and then SSH into it
- In the console, edit the `Name` tag to read `Clone`
- `sudo yum update` -- notice how the updates need to be applied again
- `df --print-type --human-readable` -- is the second volume mounted?
- `lsblk --list --fs` -- is the second volume available?
- In the console, switch to `Instances` view and note the AZ id of the clone
- Stop the clone
- In the console, switch to `Volumes` view
- Select the clone instance
- `Create Volume`
- Search by your snapshot's description and select it
- Select the same AZ that your instance lives in
- `Create`
- Select the newly created volume (should be 100 GB in size)
- Fill in the tags
- `Actions`, `Attach Volume`
- Select your clone instance
- Leave `Device` at its default value
- `Attach` and wait for it to complete
- SSH into your instance
- Run the previously outlined steps to mount the volume
- The volume already has a file system on it so don't format the volume
- `ls --all -l --human-readable /mnt/data`. Is your file still there?
- Ensure your instance has a file that distinguishes yours from others
- Select the `Instances` view
- Select your original instance
- `Actions`, `Image`, `Create Image`
- Fill in `Image Name` and `Image Description` as you like
- `Create Image`
- Wait for the AMI to complete
- Select the AMI
- Fill in the tags
- `Launch`
- Pick `t2.nano`
- `Review and Launch`
- Carefully review the settings and compare them to how the original instance looks
- `Cancel` and start again, this time going through the full wizard
- Launch the instance, wait for it to spin up and SSH into it
- See if OS patches need to be applied
- What files are in the home folder?
- What files are in the data folder?
- In the console, select `AMIs`
- Select your AMI
- `Actions`, `Modify Image Permissions`, change to public
- In the console, verify that you are in the `N. Virginia` region
- `Instances`
- `Launch Instance`
- `Community AMIs`
- Search for an AMI created by someone else in the class
- Spin up the instance and poke around. See if it looks the same as the AMI creator expects.
- Look in both the home directory and data volume
- Pull up the Amazon EC2 AMI Locator
- Find a `xenial` `hvm:instance-store` AMI and click on its id
- Select `m3.medium`, which will cost you $0.07/hour
- SSH into the instance
- Place a file in the home directory
- `Reboot` the instance
- See if your file is still there
- Try and `Stop` the instance. Will it let you?
- Select a stopped instance, a small type is preferred
- `Actions`, `Instance Settings`, `Change Instance Type`
- Select something from a different family that is larger, e.g. `m3.medium`, costing $0.07/hour
- Spin up the instance
- SSH into the instance
- Poke around and verify the increased cores and RAM, e.g. `cat /proc/meminfo`, `cat /proc/cpuinfo`, `top`
- Stop the instance
- Change the instance type back to what it was
- Create a new EC2 instance of any size
- Ensure `Enable termination protection` is checked
- After the instance is up, use the console to terminate it. What happens?
- Stop the instance
- Use the console to terminate the instance. What happens?
- SSH into an instance
- `curl http://169.254.169.254/latest/meta-data/` -- notice the trailing slash
- Try different endpoints, e.g. `curl http://169.254.169.254/latest/meta-data/instance-id`
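The endpoint walk above can be collected into a short script; this is a minimal sketch, assuming you run it on the instance itself, since the link-local metadata address only answers there:

```shell
#!/bin/bash
# Walk a few common instance metadata endpoints.
BASE="http://169.254.169.254/latest/meta-data"

metadata() {
    # --max-time keeps the call from hanging if run off-instance
    curl --silent --max-time 2 "${BASE}/$1"
}

for endpoint in instance-id instance-type placement/availability-zone
do
    echo "${endpoint}: $(metadata ${endpoint})"
done
```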
- Spin up an existing EC2 instance, one that has the extra data volume attached
- SSH into the instance
- `df -Th` to note the size of the volumes
- From the console, find the data volume in the `Instances` view and click through
- From the `Volumes` view, `Actions`, `Modify Volume`
- Double the size of the volume and click `Modify`
- Once complete, `df -Th` to see that the volume is still at its original size
- `sudo file -s /dev/xvdb` to verify the volume's file system
- `lsblk` to verify that there is no partition that needs to be extended
- Note the difference in size reported by `df` and `lsblk`
- `sudo resize2fs /dev/xvdb` to expand the volume to its new size
- `df -Th` to verify that the file system has expanded to match the new volume size
- `lsblk` and `df -h` should now agree
- Spin up an Amazon Linux EC2 instance, `t2.nano` or `t2.micro` will do
- `sudo yum update` -- patch any security vulnerabilities
- `sudo yum install docker` -- install the Docker runtime
- `groups ec2-user`
- `sudo usermod --append --groups docker ec2-user`
- `groups ec2-user`
- `sudo service docker restart`
- Log out and back into the instance or the next step will show an error
- `docker info`
- Install the Docker container using the script below
- `docker ps`
- `curl localhost:8080/operations/health | python -m json.tool`
- `curl localhost:8080/operations/info | python -m json.tool`
- `curl localhost:8080/ | python -m json.tool`
- Create an AMI out of it. We'll use it to create multiple instances.
```bash
#!/bin/bash
APPLICATION_NAME=TLO
HOST_NAME=$(curl http://169.254.169.254/latest/meta-data/hostname)
AVAILABILITY_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-type)
CMD="docker run --detach \
    --name aws-echo \
    --network host \
    --restart always \
    --env INFO_APPLICATION_NAME=${APPLICATION_NAME} \
    --env INFO_APPLICATION_INSTANCE=${INSTANCE_TYPE}:${INSTANCE_ID} \
    --env INFO_APPLICATION_LOCATION=${AVAILABILITY_ZONE}:${HOST_NAME} \
    kurron/spring-cloud-aws-echo:latest"
echo ${CMD}
${CMD}
```
- Use the AMI to launch at least 3 small instances
- Use a wide open security group to avoid firewall issues
- Make sure the instances get assigned a public IP address
- After they spin up, grab the public IP addresses and test them from your Windows box or another EC2 instance
- `curl ip-address:8080/operations/health | python -m json.tool`
- `curl ip-address:8080/operations/info | python -m json.tool`
- `curl ip-address:8080/ | python -m json.tool`
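A quick way to smoke-test all of the instances at once is a loop; this is a sketch, with placeholder addresses you would replace with your instances' public IPs:

```shell
#!/bin/bash
# Smoke-test each freshly launched instance; the addresses below are placeholders.
INSTANCES="34.209.215.10 54.202.28.248"

health_url() {
    echo "http://$1:8080/operations/health"
}

for ip in ${INSTANCES}
do
    echo "checking $(health_url ${ip})"
    curl --silent --max-time 5 "$(health_url ${ip})"
    echo
done
```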
- `Load Balancers`, `Create Load Balancer`
- Select `Classic Load Balancer`, we create an `Application Load Balancer` later
- Name the balancer so you know it is the classic balancer
- `Load Balancer Port` should be `80` and `Instance Port` should be `8080`
- `Assign Security Groups`, select a wide open group
- `Configure Security Settings`, `Configure Health Check`
- `Ping Port` should be `8080`
- `Ping Path` should be `/operations/health`
- `Interval` should be `10` seconds
- `Healthy threshold` should be `2`
- `Add EC2 Instances`
- Select your newly spun up instances, `Add Tags`
- Fill in the tags, `Review and Create`
- `Create`
- Select the `Instances` tab and wait for the instance status to be `InService`
- Note the balancer's DNS name
- `curl lb-dns-name:80/ | python -m json.tool`
- Create a script to continually hit the ELB (see below)
- Run it and note the `served-by` value. It should change with each request.
- Also note the `calculated-return-path` and how it reflects the "outside" view
- Note how we hit the balancer at port `80` but the service lives at `8080`
- In the console, pull up the `Instances` tab to see which instances are showing as healthy
- Select one instance and SSH into it
- `docker ps` to show the running containers
- `docker stop aws-echo` to turn off the container
- Watch the watcher script. Has service been disrupted? How has the `served-by` changed?
- Examine the balancer's `Monitoring` and `Instances` tabs
- Repeat the process for another node. How is service behaving now?
- Re-enable each service via `docker start aws-echo` and watch the script and console
- How would the health check settings affect how quickly instances get removed and added?
- How do availability zones affect the load balancer and the EC2 instances?
- If we wanted the load balancer to front more than one type of application, what would we do?
- What non-HTTP applications might you front with a load balancer?
```bash
#!/bin/bash
# Your DNS name is going to be different
ELB=${1:-classic-load-balancer-1166062004.us-east-1.elb.amazonaws.com}
DELAY=${2:-2}
CMD="curl --silent http://${ELB}:80/"
for (( ; ; ))
do
    # echo ${CMD}
    ${CMD} | python -m json.tool
    sleep ${DELAY}
done
```
- Follow the instructions in the previous lab and create a Docker AMI
- Use the script below to install the correct containers. Don't use the script in the original instructions.
- For this example, we saved the script to `install-application.sh`
- `./install-application.sh TLO / 8080` to install the TLO application at port 8080
- `./install-application.sh Mold-E / 9090` to install the Mold-E application at port 9090
- `curl --location --silent localhost:8080 | python -m json.tool`
- `curl --location --silent localhost:8080/operations/health | python -m json.tool`
- `curl --location --silent localhost:8080/operations/info | python -m json.tool`
- `curl --location --silent localhost:9090 | python -m json.tool`
- `curl --location --silent localhost:9090/operations/health | python -m json.tool`
- `curl --location --silent localhost:9090/operations/info | python -m json.tool`
- Create the AMI
```bash
#!/bin/bash
APPLICATION_NAME=${1:-FOO}
SERVER_PATH=${2:-/foo}
SERVER_PORT=${3:-1234}
HOST_NAME=$(curl http://169.254.169.254/latest/meta-data/hostname)
AVAILABILITY_ZONE=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-type)
CMD="docker run --detach \
    --name ${APPLICATION_NAME} \
    --network host \
    --restart always \
    --env INFO_APPLICATION_NAME=${APPLICATION_NAME} \
    --env INFO_APPLICATION_INSTANCE=${INSTANCE_TYPE}:${INSTANCE_ID} \
    --env INFO_APPLICATION_LOCATION=${AVAILABILITY_ZONE}:${HOST_NAME} \
    --env SERVER_CONTEXT-PATH=${SERVER_PATH} \
    --env SERVER.PORT=${SERVER_PORT} \
    kurron/spring-cloud-aws-echo:latest"
echo ${CMD}
${CMD}
```
- Use the newly created AMI to create 2 new instances
- Use `cURL` and ensure that both ports can be hit on each instance
- `curl --location --silent 54.202.28.248:8080/operations/info | python -m json.tool`
- `curl --location --silent 54.202.28.248:9090/operations/info | python -m json.tool`
- `curl --location --silent 54.202.28.248:8080/ | python -m json.tool`
- `curl --location --silent 54.202.28.248:9090/ | python -m json.tool`
- Create a classic ELB
- Map port `1024` on the ELB to port `8080` in the instances
- Map port `2048` on the ELB to port `9090` in the instances
- Use `/operations/health` for the health check
- Notice how only one of the two applications can provide the health check
- Use `cURL` and hit each port on the ELB (convenience script provided below)
- Notice how each port maps to a different application
- Notice how we had to install both applications on each instance
- What are some of the drawbacks to this technique?
```bash
#!/bin/bash
ELB=${1:-dual-applications-classic-876351830.us-west-2.elb.amazonaws.com}
DELAY=${2:-2}
TLO="curl --location --silent ${ELB}:1024/operations/info"
MOLDE="curl --location --silent ${ELB}:2048/operations/info"
for (( ; ; ))
do
    ${TLO} | python -m json.tool
    echo
    ${MOLDE} | python -m json.tool
    sleep ${DELAY}
done
```
- Create another Docker AMI, this time do not install the Docker containers
- Create 4 machines from the AMI but install user data with the Docker container script from the previous exercise
- The ALB wants instances in at least two AZs so ensure you have the 4 instances split between 2 AZs
- Have 2 instances be the `TLO` application: `APPLICATION_NAME=TLO`, `SERVER_PATH=/tlo`, `SERVER_PORT=8080`
- Have 2 instances be the `Mold-E` application: `APPLICATION_NAME=Mold-E`, `SERVER_PATH=/mold-e`, `SERVER_PORT=9090`
- Hit the `/tlo/operations/info` and `/mold-e/operations/info` endpoints
- It is very important to test the endpoints with so many moving parts
- `Create Load Balancer`, `Application Load Balancer`
- `Name` can be anything you want
- `Scheme` should be `internet-facing`
- `HTTP Listeners` port `80`
- Select all subnets in your VPC. The UI is odd in this context.
- Set your tags
- `Configure Security Settings`, `Configure Security Groups`
- Select or create a wide open group -- all ports and addresses
- `Configure Routing`
- `New Target Group` with `Name` of `TLO`, `Protocol` of `HTTP`, `Port` of `8080`
- `Health Checks` should be set to `HTTP` and `/tlo/operations/health`
- `Register Targets`
- Select only the TLO instances -- we need the others for another group
- `Review` and `Create`
- Wait for the balancer to be provisioned
- `Target Groups`, `Create Target Group`
- Create a `Mold-E` group that points to port `9090` and `/mold-e/operations/health`
- Select your load balancer
- `View/edit rules`, `Add Rules`, `Insert Rule`, `Path`, forward `/tlo/*` to `TLO`, `Save`
- Repeat but forward `/mold-e/*` to `Mold-E`
- Test the ELB via `curl --location --silent ALB-Experiment-763587424.us-west-2.elb.amazonaws.com/tlo/operations/info`
- Test the ELB via `curl --location --silent ALB-Experiment-763587424.us-west-2.elb.amazonaws.com/mold-e/operations/info`
- Run the watcher script below
- Turn off containers and watch how the ELB responds
```bash
#!/bin/bash
ELB=${1:-dual-applications-classic-876351830.us-west-2.elb.amazonaws.com}
DELAY=${2:-2}
TLO="curl --location --silent ${ELB}:80/tlo/operations/info"
MOLDE="curl --location --silent ${ELB}:80/mold-e/operations/info"
for (( ; ; ))
do
    ${TLO} | python -m json.tool
    echo
    ${MOLDE} | python -m json.tool
    sleep ${DELAY}
done
```
In this lab we'll create an empty cluster ready to accept work.
- `EC2 Container Service`, `Create Cluster`
- Use `transparent` as the cluster name
- `Create an empty cluster`
- Look up the AMI to use for your region
- Create 2 instances from the AMI -- ideally in different availability zones
- `t2.nano` should do just fine
- Important: `IAM Role`, select `ecsInstanceRole` from the list
- `Advanced Details` and enter the script below
- `Storage` and leave defaults
- Add your tags
- Use a wide-open security group
- Launch the instances
- Monitor the `ECS Instances` tab of the `Amazon ECS` view
- In a few minutes, your instances should be registered with the cluster
```bash
#!/bin/bash
echo ECS_CLUSTER=transparent >> /etc/ecs/ecs.config
```
- `Amazon ECS`, `Task Definitions`, `Create new Task Definition`
- `Task Definition Name`: tlo
- `Task Role`: None
- `Network Mode`: Host, ignoring the warning
- `Add container`
- `Container name`: TLO-hard-port
- `Image`: kurron/spring-cloud-aws-echo:latest
- `Memory Limits (MB)`: Hard limit, 256
- `Env Variables`: INFO_application_name, TLO
- `Env Variables`: SERVER_CONTEXT-PATH, /tlo
- `Add`, `Create`
- `Amazon ECS`, `Clusters`, `transparent`
- `Services`, `Create`
- `Task Definition`: tlo
- `Cluster`: transparent
- `Service Name`: tlo-hard-port
- `Number of tasks`: 2
- `Create Service`, `View Service`
- Monitor the `Tasks` tab until both tasks are running
- Obtain the public addresses of both EC2 instances
- Hit each instance and make sure it responds
- `curl --location --silent 54.200.196.150:8080/tlo/operations/info | python -m json.tool`
- Poke around the different views and see what information is available
- Select one of the instances in the `ECS Instances` view
- `Actions`, `Drain Instances`
- Did the 2nd container get moved to the remaining instance?
- How can we find out what happened?
- How can we fix things?
- `Application Load Balancer`
- Defaults are appropriate but select all availability zones and subnets
- Wide open security group
- Create a new target group but it is a dummy one that won't be used
- Do not register any targets
- `Review` and `Create`
- Wait for it to be provisioned
- Select the `tlo` task definition
- `Create new revision`
- Change `Network Mode` from `Host` to `Bridge`
- Click `TLO-Hard-Port`
- Change the name to `TLO-Dynamic-Port`
- `Add port mapping`
- Leave `Host Port` blank, `Container Port` 8080
- `Update`
- `Create`
- `Amazon ECS`, `Clusters`, `transparent`
- `Services`, `Create`
- `Task Definition`: tlo:2
- `Cluster`: transparent
- `Service Name`: tlo-dynamic-port
- `Number of tasks`: 2
- `Configure ELB`
- The defaults should be sufficient
- `Add to ELB`
- `Listener port` should be `80:HTTP`
- Change `Path pattern` to `/tlo*`
- `Evaluation order` to 1
- Change `Health check path` to `/tlo/operations/health`
- `Save`
- `Create Service`, `View Service`
- Monitor the `Tasks` tab until both tasks are running
- Check the `Events` tab
- cURL the ELB endpoint: `curl --silent ecs-balancer-1527593673.us-west-2.elb.amazonaws.com/tlo/operations/info | python -m json.tool`
- Traffic should be balanced between the instances
- Change the number of desired tasks and see what happens
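The desired count can also be changed from the CLI; this is a sketch using the echo-then-run pattern from earlier scripts, assuming the cluster and service names used in this lab:

```shell
#!/bin/bash
# Scale the ECS service from the command line; names match this lab's examples.
CMD="aws ecs update-service --cluster transparent --service tlo-dynamic-port --desired-count 4"
echo ${CMD}
# ${CMD}   # uncomment on a box with the CLI installed and suitable IAM rights
```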
- `Create launch configuration`
- Pick the Amazon Linux AMI and run it on a `t2.nano`
- Name it `ec2-asg` and click `Enable CloudWatch detailed monitoring`
- Make sure it gets a public address because we need to SSH into the boxes
- Attach a wide-open security group
- `Create launch configuration`
- `Close`
- `Create Auto Scaling group`
- Select `ec2-asg` from the list, `Next Step`
- Name it `ec2-asg` and start with 1 instance
- Select your VPC and all public subnets within it
- `Advanced` and select `Enable CloudWatch detailed monitoring`
- `Configure scaling policies`, `Use scaling policies to adjust the capacity of this group`
- Scale between 1 and 6 instances
- `Average CPU Utilization` and `Target Value` of 50
- Give the instances 10 seconds to warm up
- Set `Health Check Grace` to 60 seconds
- Set `Default Cooldown` to 60 seconds
- `Configure Notifications`, `Configure Tags`, `Create Auto Scaling Group`
- Verify that you have one instance spinning up
- SSH into the instance
- `sudo yum update`, `sudo yum install stress`
- `stress --verbose --cpu 1`
- In another terminal, SSH into the instance and run `top` to ensure 100% of the CPU is being used
- Monitor the `Activity History` and `Instances` tabs in your ASG view
- What happens? How long does it take?
- Kill the stress program and monitor the ASG
- What happens? How long does it take?
- If we chose to `Disable scale-in`, what would happen?
- Change `Health Check Grace` to the default 300 seconds and re-run the experiment
- Try configuring Step Scaling and re-run the experiment
- Clean up your ASG or you will always be running an instance!
Based on how long it took us to simulate resource exhaustion, this lab will just be about noting where in the console the auto scaling option for containers exists. You can trigger scaling events on your own time.
- Select an existing service definition and `Update` it
- Under `Optional configuration`, click `Configure Service Auto Scaling`
- `Configure Service Auto Scaling to adjust your service’s desired count`
- `Add scaling policy`, `Create new Alarm`
- Notice how not only can we use the pre-baked RAM and CPU triggers but we can also create our own
- NOTE: we can combine EC2 instance scaling with ECS container scaling
We'll need to understand the basics of API Gateway before we can move on to our final compute capability, Lambdas. Watching the API Gateway video prior to this lab is highly recommended.
- A working TLO echo service is required. Past labs provide at least 3 possible ways of doing this.
- Select `API Gateway` in the console
- `Create API`, `New API`
- Name it `aws-study-group`, put in a description followed by `Create API`
- Select the `/` resource, click `Actions` and select `Create Method`
- Select `ANY` from the dropdown and click the check mark
- `Integration type` of HTTP, check `Use HTTP Proxy integration`, enter in your TLO endpoint, then `Save`
- e.g. `http://ecs-balancer-1527593673.us-west-2.elb.amazonaws.com/tlo/`
- Click `Method Request`
- Expand `HTTP Request Headers` then `Add header`
- `Name` should be `host`, click the check mark
- Check the `Required` check box
- Click `Integration Request` and expand `HTTP Headers`
- `Add header`, `Name` should be `x-forwarded-host`, `Mapped from` should be `method.request.header.host`, click the check mark
- Click the `Test` link
- Test a `GET` request
- Select the API in the tree
- `Actions`, `Deploy API`
- Create a new stage called `production` and `Deploy`
- cURL the API endpoint, e.g. `https://a8eu4cq3gl.execute-api.us-west-2.amazonaws.com/production/`
- Notice the `calculated-return-path` and `x-forwarded-host` properties
- cURL the `/operations/info` endpoint. What happens?
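Spelled out as a full command, hitting the deployed stage looks roughly like this; the invoke URL is the example from this lab and yours will differ:

```shell
#!/bin/bash
# Hit the deployed stage; substitute your own invoke URL.
API="https://a8eu4cq3gl.execute-api.us-west-2.amazonaws.com/production/"
CMD="curl --silent ${API}"
echo ${CMD}
# ${CMD} | python -m json.tool   # run where you have network access
```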
- `Resources`, `Actions`, `Create Resource`
- Check `Configure as proxy resource`, `Create Resource`
- `HTTP Proxy`
- `Endpoint URL` should have your endpoint plus the `{proxy}`, e.g. `http://ecs-balancer-1527593673.us-west-2.elb.amazonaws.com/tlo/{proxy}`
- `Save`
- Publish the API again
- Try cURLing the operations endpoint again, e.g. `curl https://w4f4fmcaa6.execute-api.us-west-2.amazonaws.com/production/operations/info/`
- Select `API Keys`
- `Actions`, `Create API Key`
- Name it `aws-study-group`
- `Save`
- Select your API
- Select `ANY`, `Method Request`
- Change `API Key Required` to true
- Republish the API
- cURL the slash endpoint. What happens?
- Add the `x-api-key` header using your API key to the request
- Add `--header "x-api-key: my-api-key"` to the cURL command
- cURL the slash endpoint. What happens?
- NOTE: you should be denied until a Usage Plan is attached
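Put together, the keyed request looks roughly like this; the invoke URL is the earlier example from this lab and the key value is a placeholder:

```shell
#!/bin/bash
# Send the API key along with the request; the URL and key are placeholders.
API="https://a8eu4cq3gl.execute-api.us-west-2.amazonaws.com/production/"
API_KEY="REPLACE-WITH-YOUR-KEY"
CMD="curl --silent --header x-api-key:${API_KEY} ${API}"
echo ${CMD}
# ${CMD} | python -m json.tool   # run where you have network access
```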
- `Usage Plans`, `Create`
- Name it `aws-study-group-usage-plan`
- Set the `Rate` to `1` and the `Burst` to `2`
- Limit to `10` requests per `Day`
- `Next`
- `Add API Stage` and associate the `production` stage of your API
- `Next`, `Add API Key to Usage Plan`
- Associate the API key to the plan, `Done`
- Try cURLing the endpoint again. What happens?
- Hit the endpoint several more times. What happens?
- `Usage Plans`, `aws-study-group-usage-plan`, `API Keys`, `Extension`
- Give the key 5 more requests for the day
- cURL again. What happens?
- Add the other resource to the Usage Plan
- Hit `/operations/info` and verify that limits are working
We will be following the steps in Amazon Alexa Skill Recipe with Python 3.6. IMPORTANT: set up a developer account prior to attempting the lab.
```json
{
  "intents": [
    {
      "intent": "DinnerBotIntent"
    }
  ]
}
```
```
DinnerBotIntent What should I have for dinner
DinnerBotIntent Do you have a dinner idea
DinnerBotIntent Whats for dinner
```
```python
import random

dinnerOptions = [
    "Chicken",
    "Beef",
    "Pork",
    "Fish",
    "Vegetarian"
]

def lambda_handler( event, context ):
    # pick a random dinner and wrap it in the Alexa response envelope
    dinner = random.choice( dinnerOptions )
    response = {
        'version': '1.0',
        'response': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': dinner
            }
        }
    }
    return response
```
We'll be following the steps in Using AWS Lambda with Amazon API Gateway (On-Demand Over HTTPS)
We'll be following the steps in Using AWS Lambda with Scheduled Events
We'll be following the steps in Using AWS Lambda with Amazon S3
It has become apparent to me that people enjoy a "guided tour" as opposed to step-by-step instructions. So, for this lab, we'll outline a simple objective and let people figure out how to do it based on what was learned in the video.
- Write a CloudFormation template that creates a new ECS cluster
- Feel free to use my personal template as a reference
- You will need CLI access so I suggest creating a dedicated EC2 instance for this and future labs
- You will need to access the CloudFormation reference documentation to do this
- For bonus points, launch your stack via the console
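A minimal sketch of what such a template might look like -- the cluster name is an assumption borrowed from the earlier ECS lab, and a real answer would add the parameters and outputs the lab requires:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Empty ECS cluster for the study group lab
Resources:
  EcsCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: transparent    # assumed name -- matches the earlier ECS lab
```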
- spin up an Amazon Linux instance, making sure to assign it an admin role
- ensure the CLI is installed via `aws --version`
- install git via `sudo yum update` followed by `sudo yum install git`
- `git --version`
- clone the study group materials: `git clone https://github.com/kurron/aws-study-group-labs.git`
- `cd aws-study-group-labs/labs/lab-20`
- `./sanity-check-cli.sh`
- edit `spin-up-instance-via-aws-cli.sh` so that it succeeds
- run the script a second time. What happens?
- run the script a third time. What happens?
- create a script that terminates the instance you just created
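The termination script can follow the echo-then-run pattern used elsewhere in these labs; this is a sketch, where the default instance id is a placeholder you would replace with the id your spin-up script printed:

```shell
#!/bin/bash
# Terminate the instance created by the spin-up script; pass the real instance id.
INSTANCE_ID=${1:-i-0123456789abcdef0}   # placeholder id

terminate_cmd() {
    echo "aws ec2 terminate-instances --instance-ids $1"
}

CMD=$(terminate_cmd ${INSTANCE_ID})
echo ${CMD}
# ${CMD}   # uncomment on the instance with the admin role
```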
- spin up the instance from Lab 20
- `cd aws-study-group-labs`
- reset the area via `git reset --hard`
- `git status` to ensure old changes no longer exist
- `git pull --rebase` to update the area with the new lab
- Open the CloudFormation view in the console
- `cd labs/lab-21/`
- edit `cloudformation.yml` to utilize the supplied parameters
- `./validate-stack.sh` will validate the descriptor
- edit and run `./create-stack.sh` to spin up the stack
- look at the stack in the console, especially the outputs and events
- `./create-stack.sh` a second time. What happens?
- edit and run `./destroy-stack.sh` to clean things up
- spin up the instance from Lab 20
- `cd aws-study-group-labs`
- reset the area via `git reset --hard`
- `git status` to ensure old changes no longer exist
- `git pull --rebase` to update the area with the new lab
- `cd labs/lab-22/`
- `./install-ansible.sh` to install Ansible
- edit `playbook.yml` adding the missing pieces
- `./playbook.yml` to run your playbook
- run the playbook a second time. What happens?
- Compare the Ansible descriptor to the CloudFormation one. Which do you prefer?
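Without giving away the lab's answer, a playbook that spins up an instance is roughly this shape -- the module parameters shown are assumptions (the AMI id is a placeholder), and the repo's `playbook.yml` may be organized differently:

```yaml
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: spin up an EC2 instance
      ec2:
        image: ami-0123456789abcdef0   # placeholder AMI id
        instance_type: t2.nano
        wait: yes
```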
- spin up the instance from Lab 20
- `cd aws-study-group-labs`
- reset the area via `git reset --hard`
- `git status` to ensure old changes no longer exist
- `git pull --rebase` to update the area with the new lab
- `cd labs/lab-23/`
- `./install-terraform.sh` to install Terraform
- edit `ec2-instance.tf` so it can spin up an EC2 instance -- don't supply a key pair
- `terraform init` to initialize Terraform
- `terraform plan` to validate the file and see what changes are proposed
- `terraform apply` to execute the proposed changes
- `terraform show` to see the results of the execution
- visit the console and verify that the instance fully comes up
- edit `ec2-instance.tf` and add the key pair attribute
- `terraform plan` to see what changes are proposed
- What is interesting in the output?
- `terraform apply` to execute the proposed changes
- visit the console and verify that the instance fully comes up
- How many instances do you see?
- `terraform apply` again to see what happens
- `terraform destroy` to clean up
- Compare the Terraform descriptor to the Ansible and CloudFormation ones. Which do you prefer?
- spin up the instance from Lab 20
- `cd aws-study-group-labs`
- reset the area via `git reset --hard`
- `git status` to ensure old changes no longer exist
- `git pull --rebase` to update the area with the new lab
- `cd labs/lab-24/`
- `./install-packer.sh`
- pull up `links.txt` and browse the Packer documentation
- edit `ami.json` filling in the missing information
- `./run-packer.sh` to execute your changes
- in the console, create an instance from your newly minted AMI
- ssh into the instance and see if the file you added is there
- have Packer use Ansible to install a package, like `tree`
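An `ami.json` skeleton is roughly this shape -- the region and source AMI id are placeholders, and the repo's file will differ; JSON has no comments, so the hedging lives here:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "study-group-packer-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["sudo yum --assumeyes install tree"]
    }
  ]
}
```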
- Spin up an EC2 instance running with an IAM role with admin rights
- Install the AWS cli
- 8.2 Creating Amazon SNS Topics and Subscriptions with AWS CLI
- We will follow his instructions to create a topic and publish to it. Since we are using a single account, our commands will be slightly different.
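The single-account flavor of those steps, sketched with the echo-then-run pattern -- the topic name and e-mail address are placeholders:

```shell
#!/bin/bash
# Create a topic, subscribe an address, and publish; values are placeholders.
TOPIC_NAME=aws-study-group
EMAIL=student@example.com

CREATE="aws sns create-topic --name ${TOPIC_NAME} --output text --query TopicArn"
echo ${CREATE}
# TOPIC_ARN=$(${CREATE})
# aws sns subscribe --topic-arn ${TOPIC_ARN} --protocol email --notification-endpoint ${EMAIL}
# the e-mailed confirmation must be accepted before publishes arrive
# aws sns publish --topic-arn ${TOPIC_ARN} --message "Hello from the study group"
```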
- AutoScaling and CloudWatch
- CloudFormation
- Storage in AWS
- Relational Database Service
- Simple Storage Service
- CloudFront
- ElastiCache
- Virtual Private Cloud
- Simple Notification Service
- Simple Email Service
- Simple Queuing Service
- Identity And Access Management (IAM)
- Route 53
- Building A 3 Tier Scalable Web Application In The Cloud
- Introduction To Ansible
- Setup, Installation And Configuration
- Ansible, Ansible-Docs, Ansible-Playbook
- Includes And Roles
- Playbooks Continued
- Special Topics
This project is licensed under the Apache License Version 2.0, January 2004.