Terraform Module for ECS Architecture

This Terraform module builds the base AWS infrastructure needed to support ECS Services. It implements the architecture shown in the architectural diagram below.

By default, the module builds an Auto Scaling Group in private subnets. VPC Endpoints provide the links to AWS services that the ECS Services need to run, as well as access to ECR repositories.

By default the ec2_keypair input is null and no Security Group rules allow SSH access; EC2 instances can instead be accessed using AWS Session Manager.

In order to provide a running ECS Service, a child module will need to build its own ECS Service and ECS Task Definition objects, in addition to an ECR Repository. It will not be possible to use publicly available container images, as instances in the private subnets have no outbound route to the internet.
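
As a rough sketch (not part of this module), the child-module resources described above might look like the following. The repository name, task sizing, and the ecs_cluster_arn variable are illustrative assumptions; the cluster ARN is expected to come from this module's ecs_cluster_arn output.

```hcl
# Hypothetical child-module resources; names and sizing are illustrative.
resource "aws_ecr_repository" "this" {
  name = "example-app"
}

resource "aws_ecs_task_definition" "this" {
  family = "example-app"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "${aws_ecr_repository.this.repository_url}:latest"
      cpu       = 256
      memory    = 512
      essential = true
    }
  ])
}

resource "aws_ecs_service" "this" {
  name            = "example-app"
  cluster         = var.ecs_cluster_arn # from this module's ecs_cluster_arn output
  task_definition = aws_ecs_task_definition.this.arn
  desired_count   = 1
}
```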

A child module will also need to create its own Load Balancer Listener and Load Balancer Target Group. The child module will need an aws_autoscaling_attachment resource to connect the target group to the Autoscaling Group created by this module. The attachment allows the Autoscaling Group to automatically register EC2 Instances with the target group.
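
A minimal sketch of that wiring, assuming the child module receives this module's asg_name and vpc_id outputs through variables; the target group name, port, and protocol are illustrative:

```hcl
# Hypothetical target group for the child module's service
resource "aws_lb_target_group" "this" {
  name     = "example-app"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = var.vpc_id # from this module's vpc_id output
}

# Connect the target group to the Auto Scaling Group created by this module,
# so the ASG registers its EC2 Instances with the target group automatically
resource "aws_autoscaling_attachment" "this" {
  autoscaling_group_name = var.asg_name # from this module's asg_name output
  lb_target_group_arn    = aws_lb_target_group.this.arn
}
```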

To allow connectivity from the Internet, this module has been designed to operate behind AWS CloudFront. CloudFront has not been implemented in this module, as several CloudFront distributions may exist for the same ECS base architecture. The alb_dns_name output from this module can be used in the domain_name and origin_id arguments of a custom origin block in an aws_cloudfront_distribution resource; this directs requests from the CloudFront distribution to the public Application Load Balancer created by this module. The module also outputs waf_acl_arn, which may be passed into the web_acl_id argument of an aws_cloudfront_distribution resource to protect the CloudFront distribution.
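
A minimal sketch of such a distribution, assuming a module labelled ecs_base and illustrative cache and certificate settings; only the use of the alb_dns_name and waf_acl_arn outputs follows from this module:

```hcl
resource "aws_cloudfront_distribution" "this" {
  enabled    = true
  web_acl_id = module.ecs_base.waf_acl_arn # this module's waf_acl_arn output

  origin {
    domain_name = module.ecs_base.alb_dns_name # this module's alb_dns_name output
    origin_id   = module.ecs_base.alb_dns_name

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = module.ecs_base.alb_dns_name
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = true
      cookies {
        forward = "all"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```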

Requirements

No requirements.

Providers

| Name | Version |
|------|---------|
| aws | n/a |
| aws.us-east-1 | n/a |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| aws_acm_certificate.default | resource |
| aws_acm_certificate_validation.default | resource |
| aws_autoscaling_group.this | resource |
| aws_default_security_group.this | resource |
| aws_ecs_capacity_provider.this | resource |
| aws_ecs_cluster.this | resource |
| aws_ecs_cluster_capacity_providers.this | resource |
| aws_eip.nat_a | resource |
| aws_iam_instance_profile.instance | resource |
| aws_iam_role.instance | resource |
| aws_iam_role_policy_attachment.ec2_container_service | resource |
| aws_iam_role_policy_attachment.ssm_managed_instance | resource |
| aws_internet_gateway.this | resource |
| aws_launch_template.this | resource |
| aws_lb.this | resource |
| aws_lb_listener.https | resource |
| aws_nat_gateway.nat_a | resource |
| aws_route.cudl_vpc_ec2_route_igw | resource |
| aws_route53_record.acm_validation_cname | resource |
| aws_route53_zone.public | resource |
| aws_route_table.main | resource |
| aws_route_table.private_a | resource |
| aws_route_table.private_b | resource |
| aws_route_table.public | resource |
| aws_route_table_association.private_a | resource |
| aws_route_table_association.private_b | resource |
| aws_route_table_association.public_a | resource |
| aws_route_table_association.public_b | resource |
| aws_s3_bucket.this | resource |
| aws_s3_bucket_versioning.this | resource |
| aws_security_group.alb | resource |
| aws_security_group.asg | resource |
| aws_security_group.vpc_endpoints | resource |
| aws_security_group_rule.alb_ingress_cloudfront | resource |
| aws_security_group_rule.asg_egress_s3 | resource |
| aws_security_group_rule.vpc_endpoint_egress_self | resource |
| aws_security_group_rule.vpc_endpoint_ingress_self | resource |
| aws_subnet.private_a | resource |
| aws_subnet.private_b | resource |
| aws_subnet.public_a | resource |
| aws_subnet.public_b | resource |
| aws_vpc.this | resource |
| aws_vpc_dhcp_options.this | resource |
| aws_vpc_dhcp_options_association.this | resource |
| aws_vpc_endpoint.interface | resource |
| aws_vpc_endpoint.s3 | resource |
| aws_wafv2_ip_set.this | resource |
| aws_wafv2_web_acl.this | resource |
| aws_ami.ecs_ami | data source |
| aws_caller_identity.current | data source |
| aws_cloudwatch_log_group.this | data source |
| aws_ec2_managed_prefix_list.cloudfront | data source |
| aws_ec2_managed_prefix_list.s3 | data source |
| aws_iam_policy_document.assume_role_policy | data source |
| aws_kms_alias.ebs | data source |
| aws_region.current | data source |
| aws_route53_zone.existing | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| alb_access_logs_bucket | Name of the S3 Bucket for ALB access logs | `string` | `""` | no |
| alb_access_logs_enabled | Whether to enable access logging for the ALB | `bool` | `false` | no |
| alb_access_logs_prefix | Prefix for objects in the S3 bucket for ALB access logs | `string` | `""` | no |
| alb_enable_deletion_protection | Whether to enable deletion protection for the ALB | `bool` | `true` | no |
| alb_idle_timeout | Idle timeout for the load balancer | `string` | `"60"` | no |
| alb_internal | Whether the ALB should be internal (not public facing) | `bool` | `false` | no |
| alb_listener_fixed_response_content_type | Default content type for the fixed response of the default ALB Listener | `string` | `"text/html"` | no |
| alb_listener_fixed_response_message_body | Default message body for the fixed response of the default ALB Listener | `string` | `"<!DOCTYPE html><body><h1>Hello World!</h1></body>"` | no |
| alb_listener_fixed_response_status_code | Default status code for the fixed response of the default ALB Listener | `string` | `"200"` | no |
| alb_listener_ssl_policy | TLS security policy used by the default ALB Listener | `string` | `"ELBSecurityPolicy-TLS13-1-2-2021-06"` | no |
| ami_architecture | Name of the OS architecture. Must be compatible with the selected EC2 instance type | `string` | `"x86_64"` | no |
| ami_name_prefix | Prefix used to find an AMI for use in the Launch Template | `string` | `"amzn2-ami-ecs-hvm-2.0*"` | no |
| asg_default_cooldown | Number of seconds between scaling activities | `number` | `300` | no |
| asg_desired_capacity | Desired number of instances in the Autoscaling Group | `number` | `1` | no |
| asg_enabled_metrics | List of metrics enabled for the Autoscaling Group | `list(string)` | `["GroupTotalInstances", "GroupInServiceInstances", "GroupTerminatingInstances", "GroupPendingInstances", "GroupInServiceCapacity", "GroupPendingCapacity", "GroupTotalCapacity", "GroupTerminatingCapacity"]` | no |
| asg_health_check_grace_period | Grace period before health checks are enabled. ECS Services can take 10 minutes to stabilise | `number` | `600` | no |
| asg_health_check_type | Type of health check for the Autoscaling Group. Can be EC2 or ELB | `string` | `"EC2"` | no |
| asg_max_size | Maximum number of instances in the Autoscaling Group | `number` | `1` | no |
| asg_metrics_granularity | Granularity of metrics collected by the Autoscaling Group | `string` | `"1Minute"` | no |
| asg_min_size | Minimum number of instances in the Autoscaling Group | `number` | `1` | no |
| asg_termination_policies | Termination policies used by the Autoscaling Group | `list(string)` | `["OldestLaunchTemplate"]` | no |
| cloudwatch_log_group | Name of the CloudWatch Log Group | `string` | n/a | yes |
| ec2_ebs_volume_type | Volume type used in EBS volumes | `string` | `"gp3"` | no |
| ec2_instance_type | EC2 instance type used by EC2 Instances | `string` | `"t3.small"` | no |
| ec2_keypair | Name of the EC2 Keypair for SSH access to EC2 instances | `string` | `null` | no |
| ecs_capacity_provider_target_capacity_percent | Percentage target capacity utilisation for the Autoscaling Group instances | `number` | `100` | no |
| name_prefix | Name prefix of the ECS Cluster and associated resources | `string` | n/a | yes |
| route53_delegation_set_id | The ID of the reusable delegation set whose NS records should be assigned to the hosted zone | `string` | `null` | no |
| route53_zone_domain_name | Domain name used by the Route 53 Zone. Trailing dots are ignored | `string` | `null` | no |
| route53_zone_force_destroy | Whether to destroy the Route 53 Zone even though records may still exist | `bool` | `false` | no |
| route53_zone_id_existing | ID of an existing Route 53 Hosted Zone, as an alternative to creating a hosted zone | `string` | `null` | no |
| s3_bucket_force_destroy | Whether to allow a non-empty bucket to be destroyed | `bool` | `false` | no |
| s3_bucket_versioning_enabled | Whether to enable S3 bucket versioning | `bool` | `true` | no |
| tags | Map of tags to add to resources | `map(string)` | `{}` | no |
| vpc_cidr_block | CIDR block for the VPC | `string` | `"10.0.0.0/16"` | no |
| vpc_endpoint_dns_record_ip_type | The type of DNS records created for the endpoint | `string` | `"ipv4"` | no |
| vpc_endpoint_services | List of services to create VPC Endpoints for | `list(string)` | `["ssmmessages", "ssm", "ec2messages", "ecr.api", "ecr.dkr", "ecs", "ecs-agent", "ecs-telemetry", "logs"]` | no |
| vpc_public_subnet_public_ip | Whether to automatically assign public IP addresses in the public subnets | `bool` | `false` | no |
| waf_ip_set_addresses | List of IPs for the WAF IP Set safelist | `list(string)` | `["131.111.0.0/16"]` | no |

Outputs

| Name | Description |
|------|-------------|
| alb_arn | ARN of the Application Load Balancer |
| alb_dns_name | DNS name of the Application Load Balancer |
| alb_https_listener_arn | ARN of the default Application Load Balancer Listener on port 443 |
| alb_security_group_id | ID of the Security Group for the Application Load Balancer |
| asg_name | Name of the Auto Scaling Group |
| asg_security_group_id | ID of the Security Group for the Auto Scaling Group |
| cloudwatch_log_group_arn | ARN of the CloudWatch Log Group |
| cloudwatch_log_group_name | Name of the CloudWatch Log Group |
| ecs_cluster_arn | ARN of the ECS Cluster |
| route53_public_hosted_zone | Zone ID of the Route 53 Public Hosted Zone |
| s3_bucket | Name of the S3 Bucket |
| s3_bucket_arn | ARN of the S3 Bucket |
| vpc_id | VPC ID |
| waf_acl_arn | ARN of the WAF Web ACL |

Note about Route 53 Hosted Zone

The name of the Route 53 Hosted Zone needs to match the value of a registered domain name. There is an option to manage an AWS registered domain in Terraform, but we feel it is best to avoid unintended changes to the domain, so this has been intentionally omitted from the configuration.

This module can optionally build a Route 53 hosted zone, or look up an existing hosted zone using the route53_zone_id_existing input.

In AWS, a Registered Domain specifies name servers which are used to resolve DNS queries for addresses in the domain. These can be set to the name servers created by a Route 53 Hosted Zone. However, since the hosted zone is managed by Terraform and may be replaced or destroyed, particularly in a sandbox environment, the registered domain's name servers could change frequently, and these changes would need to be made manually.

To avoid this, it is possible to create a Reusable Delegation Set to act as a bridge between the registered domain and the Route 53 Hosted Zone. So that the delegation set does not itself change frequently, it should be created outside Terraform. This can be done with the AWS CLI:

```sh
aws route53 create-reusable-delegation-set --caller-reference "$(date +"%s")"
```

This will output an Id attribute in the format "/delegationset/N10230772EN8U28YG7Z00". The second part of this ID (the unique reference) can be passed to the route53_delegation_set_id input of this module. A Route 53 Hosted Zone created by this module can then use the name servers specified in the delegation set. Once the registered domain has been updated to use the name servers listed in the output of the command above, the hosted zone can be replaced without needing to update the name server details again.
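
For illustration, a hypothetical invocation of this module passing the delegation set reference; the module source path, domain name, and other values here are assumptions:

```hcl
module "ecs_base" {
  source = "github.com/cambridge-collection/terraform-aws-architecture-ecs"

  name_prefix               = "example"
  cloudwatch_log_group      = "example-ecs"
  route53_zone_domain_name  = "example.com"
  route53_delegation_set_id = "N10230772EN8U28YG7Z00" # unique reference from the CLI output above
}
```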

Note that it is not possible to have two Route 53 hosted zones using the same domain name and the same name servers. A delegation set can only be used with a set of unique domain names.

Note about Auto Scaling Group Health Check Type

The Auto Scaling Group (ASG) can have a health check type of either EC2 or ELB; in this module the asg_health_check_type input controls this setting. The EC2 health check type checks the health of the EC2 instances attached to the ASG. This is a simple check determining whether the instances are in a running state. The ELB health check type uses health checks configured on the Elastic Load Balancer (ELB) attached to the ASG. It is not possible, however, to attach an Application Load Balancer (ALB) directly to the ASG: instead, Target Groups are attached to the ASG, and this is done at the service level. The Target Group health checks are then used by the ELB health checks configured on the ASG.

Using the ELB health check type has some implications:

  • Initially, if this module is built without any dependent service there will be no way of fulfilling the ASG health check, as there will be no target groups. This will lead to continuous instance cycling.
  • Where multiple dependent services have been built on top of the same implementation of this module, the failure of one of those services will fail the health check for all of them, impacting otherwise healthy services.

The default value for the asg_health_check_type input has therefore been set to EC2. For implementations with a small number of stable services the value of ELB may be preferred as this provides a truer reflection of service health.

Note about Load Balancer Listener

AWS does not permit multiple listeners on the same Load Balancer using the same port, which would otherwise mean only one target could be served on the default HTTPS port 443. The solution used here is to generate a "default" load balancer listener assigned to port 443. The alb_https_listener_arn output allows dependent modules to build their own aws_lb_listener_rule resources referring to the ARN of the listener. This allows a single port to front several targets, using rules to determine the correct target.

An implementation of this could use the host_header condition to route requests using the value of the Host header sent by the client (this header is automatically inserted by most HTTP user agents, such as curl). For example:

resource "aws_lb_listener_rule" "this" {
  listener_arn = var.alb_listener_arn
  priority     = var.alb_listener_rule_priority

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.this.arn
  }

  condition {
    host_header {
      values = [aws_route53_record.this.name]
    }
  }
}

The certificate aws_acm_certificate.default in this module has no corresponding Route 53 A record, so requests to the domain name of the certificate will fail. This is by design: it is not possible to create a Load Balancer listener using the HTTPS protocol without referring to a certificate, and the default certificate exists to satisfy this requirement. The domain name of the default certificate is not output by this module as it is not intended for re-use.

GitHub Workflows

Commit Lint

When pushing a commit to GitHub, or raising a Pull Request, a GitHub workflow will automatically run commitlint. This makes use of the Node.js module https://commitlint.js.org. The workflow has been configured to use the Conventional Commits specification https://www.conventionalcommits.org/en/v1.0.0/.

When commits are formatted using a canonical format such as Conventional Commits, they can be used in the release process to determine the version number. The commit history can also be used to generate a CHANGELOG.md.

For local development it is recommended to use a commit-msg Git hook. The following code should be placed in a file .git/hooks/commit-msg and made executable:

```sh
#!/bin/sh

# Lint the commit message file (passed as $1) if commitlint is installed
if command -v commitlint > /dev/null 2>&1
then
  commitlint --edit "$1"
fi
```

This depends on the commitlint tool, which can be installed with npm install -g @commitlint/{cli,config-conventional} (for a global installation). When working correctly the hook will fire whenever a commit is attempted, e.g.:

```
$ git commit -m "silly message"
⧗   input: silly message
✖   subject may not be empty [subject-empty]
✖   type may not be empty [type-empty]

✖   found 2 problems, 0 warnings
ⓘ   Get help: https://github.com/conventional-changelog/commitlint/#what-is-commitlint
```

Configuration for the commitlint tool is located in the .commitlintrc.mjs file in the root of the project. This is also used by the commit-msg hook.

Semantic Release

On a push to the main branch, i.e. after a Pull Request has been approved and merged, a GitHub workflow will run Semantic Release. This will initiate a chain of actions that will automatically handle versioning, GitHub releases and Changelog generation.

The commit analyzer bundled with Semantic Release follows the same conventionalcommits schema as is used in the commit linting.

The semantic release tooling is configured in a file .releaserc.json in the root of the project.

Terraform Format Linting

When pushing a commit to GitHub, or raising a Pull Request, a GitHub workflow will automatically run terraform fmt -check -recursive in the root of the project. If this produces a non-zero exit code, the job will fail.

Terraform fmt is an "intentionally opinionated" command that rewrites configuration files to a recommended format. Any issues detected by the check can easily be remedied by running terraform fmt -recursive, which will automatically reformat all Terraform code in the project.

For local development it is recommended to use a pre-commit hook to detect formatting issues before they are committed. Place the text below in a file .git/hooks/pre-commit and make this executable:

```sh
#!/bin/sh

# Short-circuit if terraform is not found
if ! command -v terraform > /dev/null 2>&1
then
    echo "Terraform executable was not found in $PATH"
    exit 1
fi

FORMAT_CHECK=$(terraform fmt -check -recursive 2>&1)
FORMAT_RC=$?

if echo "$FORMAT_CHECK" | grep -q "Error"
then
    # Print the error output from terraform fmt verbatim
    echo "$FORMAT_CHECK"
    exit $FORMAT_RC
elif [ $FORMAT_RC -gt 0 ]
then
    printf "\033[1;31mThe following files need to be formatted:\033[m\n"
    # Word-splitting is intentional: terraform fmt -check prints one file per line
    for f in $FORMAT_CHECK; do
        echo "$f"
    done
    printf "Run \033[1;32mterraform fmt -recursive\033[m to fix\n"
    exit $FORMAT_RC
fi
```

When working correctly, the Git hook will produce output when staged files are committed, e.g.:

```
$ git commit
The following files need to be formatted:
main.tf
modules/grault/variables.tf
Run terraform fmt -recursive to fix
```
