
AWS CDK GitLab Runner autoscaling on EC2

This project provides a CDK construct to execute jobs on auto-scaled EC2 instances using the Docker Machine executor.

Why run your own autoscaling GitLab runners on EC2?

• Running out of GitLab Runner minutes
• Using Docker-in-Docker (dind)
• Speeding up jobs with a shared S3 cache
• Cross compiling/building with multiarch environments
• Cost-effective autoscaling on EC2
• Deploying directly from your AWS accounts (without an AWS access key)
• Running jobs on Spot Instances
• Getting a bigger build log size

View on Construct Hub

Install

TypeScript

npm install @pepperize/cdk-autoscaling-gitlab-runner

or

yarn add @pepperize/cdk-autoscaling-gitlab-runner

Python

pip install pepperize.cdk-autoscaling-gitlab-runner

C# / .Net

dotnet add package Pepperize.CDK.AutoscalingGitlabRunner

Java

<dependency>
  <groupId>com.pepperize</groupId>
  <artifactId>cdk-autoscaling-gitlab-runner</artifactId>
  <version>${cdkAutoscalingGitlabRunner.version}</version>
</dependency>

Quickstart

  1. Create a new AWS CDK App in TypeScript with projen

    mkdir gitlab-runner
    cd gitlab-runner
    git init
    npx projen new awscdk-app-ts
  2. Configure your project in .projenrc.js

    • Add deps: ["@pepperize/cdk-autoscaling-gitlab-runner"], (see the .projenrc.js sketch in the Projen section below)
  3. Update project files and install dependencies

    npx projen
  4. Register a new runner

    Registering runners:

    • For a shared runner, go to the GitLab Admin Area and click Overview > Runners
    • For a group runner, go to the group's Settings > CI/CD and expand the Runners section
    • For a project runner, go to the project's Settings > CI/CD and expand the Runners section

    Optionally enable Run untagged jobs [x], which indicates whether this runner can pick up jobs without tags.

    See also Registration token vs. Authentication token

  5. Retrieve a new runner authentication token

    Register a new runner via the GitLab API; the response contains the runner authentication token:

    curl --request POST "https://gitlab.com/api/v4/runners" --form "token=<your register token>" --form "description=gitlab-runner" --form "tag_list=pepperize,docker,production"
  6. Store runner authentication token in SSM ParameterStore

    Create a String parameter

    aws ssm put-parameter --name "/gitlab-runner/token" --value "<your runner authentication token>" --type "String"
  7. Add to your main.ts

    import { Vpc } from "aws-cdk-lib/aws-ec2";
    import { StringParameter } from "aws-cdk-lib/aws-ssm";
    import { App, Stack } from "aws-cdk-lib";
    import { GitlabRunnerAutoscaling } from "@pepperize/cdk-autoscaling-gitlab-runner";
    
    const app = new App();
    const stack = new Stack(app, "GitLabRunnerStack", {
      // Vpc.fromLookup requires an environment-aware stack
      env: { account: process.env.CDK_DEFAULT_ACCOUNT, region: process.env.CDK_DEFAULT_REGION },
    });
    const vpc = Vpc.fromLookup(stack, "ExistingVpc", {
      vpcId: "<your vpc id>",
    });
    const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
      parameterName: "/gitlab-runner/token",
    });
    new GitlabRunnerAutoscaling(stack, "GitlabRunner", {
      network: {
        vpc: vpc,
      },
      runners: [
        {
          token: token,
          configuration: {
            // optionally configure your runner
          },
        },
      ],
    });
  8. Create service linked role

    (Required when requesting Spot Instances, which is the default.)

    aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
  9. Configure the AWS CLI (see the sketch after this list)

  10. Deploy the GitLab Runner

    npm run deploy
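
A minimal sketch for step 9 "Configure the AWS CLI", assuming access key authentication (the profile, keys and region are placeholders); any other AWS CLI credential setup works as well:

aws configure --profile <your profile>
# AWS Access Key ID [None]: <your access key id>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: <your region>
export AWS_PROFILE=<your profile>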

Example

Custom cache bucket

By default, an AWS S3 Bucket is created as GitLab Runner's distributed cache. It's encrypted and public access is blocked. A custom S3 Bucket can be configured:

const cache = new Bucket(this, "Cache", {
  // Your custom bucket
});
const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
    },
  ],
  cache: { bucket: cache },
});

See example, GitlabRunnerAutoscalingCacheProps
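
Jobs use the distributed cache through the regular cache keyword in .gitlab-ci.yml; the runner stores and restores it from the S3 bucket transparently. A minimal sketch (the cache key and paths are placeholders):

build:
  script:
    - npm ci
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/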

Custom EC2 key pair

By default, the amazonec2 driver will create an EC2 key pair for each runner. To use custom SSH credentials, provide a SecretsManager Secret containing the private and the public key file:

  1. Create a key pair, download the private key file and remember the created key pair name

  2. Generate the public key file

    ssh-keygen -f <the downloaded private key file> -y > <the key pair name>.pub
    
  3. Create an AWS SecretsManager Secret from the key pair

    aws secretsmanager create-secret --name <the secret name> --secret-string "{\"<the key pair name>\":\"<the private key>\",\"<the key pair name>.pub\":\"<the public key>\"}"
  4. Configure the job runner

    const keyPair = Secret.fromSecretNameV2(stack, "Secret", "CustomEC2KeyPair");
    
    new GitlabRunnerAutoscaling(this, "Runner", {
      runners: [
        {
          keyPair: keyPair,
          configuration: {
            machine: {
              machineOptions: {
                keypairName: "<the key pair name>",
              },
            },
          },
        },
      ],
    });

Configure Docker Machine

By default, Docker Machine is configured to run privileged with CAP_SYS_ADMIN to support Docker-in-Docker with the OverlayFS driver and cross compiling/building with multiarch.

See runners.docker section in Advanced configuration

import { GitlabRunnerAutoscaling } from "@pepperize/cdk-autoscaling-gitlab-runner";
import { StringParameter } from "aws-cdk-lib/aws-ssm";

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
      configuration: {
        environment: [], // Clear the default environment (e.g. DOCKER_DRIVER=overlay2) for every project
        docker: {
          capAdd: [], // Remove the CAP_SYS_ADMIN
          privileged: false, // Run unprivileged
        },
        machine: {
          idleCount: 2, // Number of idle machines
          idleTime: 3000, // Time (in seconds) a machine can stay idle before it is removed
          maxBuilds: 1, // Maximum number of builds before a machine is removed
        },
      },
    },
  ],
});

See example, DockerConfiguration

Bigger instance type

By default, a t3.nano instance is used for the manager/coordinator and t3.micro instances are spawned for the runners. For bigger projects, for example builds using webpack, this won't provide enough memory.

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  manager: {
    instanceType: InstanceType.of(InstanceClass.T3, InstanceSize.SMALL),
  },
  runners: [
    {
      instanceType: InstanceType.of(InstanceClass.T3, InstanceSize.LARGE),
      token: token,
      configuration: {
        // optionally configure your runner
      },
    },
  ],
});

You may have to disable or configure Spot Instances (see the Spot instances example below).

See example, GitlabRunnerAutoscalingManagerProps, GitlabRunnerAutoscalingJobRunnerProps

Different machine image

By default, the latest Amazon Linux 2 image is used for the manager/coordinator. The manager/coordinator instance's cloud-init scripts require yum, so any RHEL flavor should work. The requested runner instances use Ubuntu 20.04 by default; any OS supported by the Docker Machine provisioner should work.

// Map your region(s) to AMI IDs, e.g. from the Ubuntu or Amazon Linux AMI locator (placeholder values)
const managerAmiMap: Record<string, string> = { "<your region>": "<an ami id>" };
const runnerAmiMap: Record<string, string> = { "<your region>": "<an ami id>" };

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  manager: {
    machineImage: MachineImage.genericLinux(managerAmiMap),
  },
  runners: [
    {
      machineImage: MachineImage.genericLinux(runnerAmiMap),
      token: token,
      configuration: {
        // optionally configure your runner
      },
    },
  ],
});

See example, GitlabRunnerAutoscalingManagerProps, GitlabRunnerAutoscalingJobRunnerProps

Multiple runners configuration

Each runner defines one [[runners]] section in the configuration file. Use Specific runners when you want to use runners for specific projects.

const privilegedRole = new Role(this, "PrivilegedRunnersRole", {
  // role 1
});

const restrictedRole = new Role(this, "RestrictedRunnersRole", {
  // role 2
});

const token1 = StringParameter.fromStringParameterAttributes(stack, "Token1", {
  parameterName: "/gitlab-runner/token1",
});

const token2 = StringParameter.fromStringParameterAttributes(stack, "Token2", {
  parameterName: "/gitlab-runner/token2",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token1,
      configuration: {
        name: "privileged-runner",
      },
      role: privilegedRole,
    },
    {
      token: token2,
      configuration: {
        name: "restricted-runner",
        docker: {
          privileged: false, // Run unprivileged
        },
      },
      role: restrictedRole,
    },
  ],
});

See example, GitlabRunnerAutoscalingProps

Spot instances

By default, EC2 Spot Instances are requested. To disable Spot requests, or to cap the Spot price, configure the machine options:

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
      configuration: {
        machine: {
          machineOptions: {
            requestSpotInstance: false, // Disable Spot requests
            spotPrice: 0.5, // Or cap the maximum Spot price (USD per hour)
          },
        },
      },
    },
  ],
});

See example, EC2 spot price, MachineConfiguration, MachineOptions, Advanced configuration - runners.machine.autoscaling

Cross-Compile with Multiarch

To build binaries for different architectures, you can also use Multiarch.

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
      configuration: {
        docker: {
          privileged: true,
        },
      },
    },
  ],
});

Configure your .gitlab-ci.yml file

build:
  image: multiarch/debian-debootstrap:armhf-buster
  services:
    - docker:stable-dind
    - name: multiarch/qemu-user-static:register
      command:
        - "--reset"
  script:
    - make build

See multiarch/qemu-user-static

Running on AWS Graviton

To run your jobs on AWS Graviton, you have to provide an AMI for the arm64 architecture.

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
      instanceType: InstanceType.of(InstanceClass.M6G, InstanceSize.LARGE),
      machineImage: MachineImage.genericLinux({
        [this.region]: new LookupMachineImage({
          name: "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-*-server-*",
          owners: ["099720109477"],
          filters: {
            architecture: [InstanceArchitecture.ARM_64],
            "image-type": ["machine"],
            state: ["available"],
            "root-device-type": ["ebs"],
            "virtualization-type": ["hvm"],
          },
        }).getImage(this).imageId,
      }),
    },
  ],
});

See Ubuntu Amazon EC2 AMI Locator

Custom runner's role

To deploy from within your GitLab Runner Instances, you may pass a Role with the IAM Policies attached.

const role = new Role(this, "RunnersRole", {
  assumedBy: new ServicePrincipal("ec2.amazonaws.com", {}),
  inlinePolicies: {},
});
const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      role: role,
      token: token,
      configuration: {
        // optionally configure your runner
      },
    },
  ],
});

See example, GitlabRunnerAutoscalingProps
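
For example, to let jobs deploy a static site to an S3 bucket, you could attach an inline policy. A minimal sketch, assuming aws-cdk-lib v2 imports; the policy name, actions and bucket name are placeholders to adjust to your deployment:

import { PolicyDocument, PolicyStatement, Role, ServicePrincipal } from "aws-cdk-lib/aws-iam";

const deployRole = new Role(this, "DeployingRunnersRole", {
  assumedBy: new ServicePrincipal("ec2.amazonaws.com"),
  inlinePolicies: {
    // "Deploy" is an illustrative policy name
    Deploy: new PolicyDocument({
      statements: [
        new PolicyStatement({
          actions: ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
          resources: ["arn:aws:s3:::<your bucket>", "arn:aws:s3:::<your bucket>/*"],
        }),
      ],
    }),
  },
});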

Vpc

If no existing Vpc is passed, a cheap VPC with a NatInstance (t3.nano) and a single AZ will be created.

const natInstanceProvider = NatProvider.instance({
  instanceType: InstanceType.of(InstanceClass.T3, InstanceSize.NANO), // using a cheaper NAT instance instead of a NAT gateway (not scalable)
});
const vpc = new Vpc(this, "Vpc", {
  // Your custom vpc, i.e.:
  natGatewayProvider: natInstanceProvider,
  maxAzs: 1,
});

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
      configuration: {
        // optionally configure your runner
      },
    },
  ],
  network: { vpc: vpc },
});

See example, GitlabRunnerAutoscalingProps

Zero config

Deploys the Autoscaling GitLab Runner on AWS EC2 with the default settings mentioned above.

Happy with the presets?

const token = StringParameter.fromStringParameterAttributes(stack, "Token", {
  parameterName: "/gitlab-runner/token",
});

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      token: token,
      configuration: {
        // optionally configure your runner
      },
    },
  ],
});

See example, GitlabRunnerAutoscalingProps

ECR Credentials Helper

By default, the amazonec2 Docker Machine driver is configured to install the amazon-ecr-credential-helper on the runners' instances.

To configure it, override the default job runner environment:

new GitlabRunnerAutoscaling(this, "Runner", {
  runners: [
    {
      // ...
      configuration: {
        environment: [
          "DOCKER_DRIVER=overlay2",
          "DOCKER_TLS_CERTDIR=/certs",
          'DOCKER_AUTH_CONFIG={"credHelpers": { "public.ecr.aws": "ecr-login", "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login" } }',
        ],
      },
    },
  ],
});
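
With the credential helper configured, jobs can pull images from Amazon ECR without running docker login. A hedged .gitlab-ci.yml sketch (account id, region and repository are placeholders):

build:
  image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<repository>:latest
  script:
    - make build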

Projen

This project uses projen to maintain project configuration through code. Thus, the files synthesized by projen should never be edited manually (in fact, projen enforces that).

To modify the project setup, interact with the rich, strongly-typed AwsCdkTypeScriptApp class in .projenrc.js and execute npx projen to update the project configuration files.

In short, developers should only modify the .projenrc.js file for configuration/maintenance and the files under the /src directory for development.
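
For reference, a minimal .projenrc.js sketch that adds this construct as a dependency (the project name and cdkVersion are placeholders; in recent projen versions the class lives in the awscdk namespace):

const { awscdk } = require("projen");

const project = new awscdk.AwsCdkTypeScriptApp({
  cdkVersion: "2.1.0", // placeholder, use your CDK version
  defaultReleaseBranch: "main",
  name: "gitlab-runner",
  deps: ["@pepperize/cdk-autoscaling-gitlab-runner"],
});

project.synth();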

See also Create and Publish CDK Constructs Using projen and jsii.