
Pipelines


Interacting with Concourse is done via the fly command-line executable. Our pipeline is named capi.

Everything is kept in git at https://github.com/cloudfoundry/capi-ci.git, in the file https://github.com/cloudfoundry/capi-ci/blob/master/ci/pipeline.yml.

Authorization

Before doing anything, one must log in:

fly -t capi login
# follow the instructions

If this causes an unknown target error, run fly -t capi login -c https://capi.ci.cf-app.com to set up the capi alias.
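If login instead complains that the local fly binary is out of sync with the server, fly can update itself in place:

fly -t capi sync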

View current pipelines

fly -t capi pipelines
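To drill into a single pipeline, fly can also list its jobs:

fly -t capi jobs --pipeline=capi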

Updating a pipeline

After updating pipeline.yml, it must be uploaded to our Concourse installation:

set_capi_pipeline

Under the hood, set_capi_pipeline does the following:

fly -t capi set-pipeline --pipeline=capi \
  --config="${HOME}/workspace/capi-ci/ci/pipeline.yml" \
  --load-vars-from="${HOME}/workspace/capi-ci-private/ci/credentials.yml"
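set_capi_pipeline is a helper from our workstation setup. If it is not on your PATH, a minimal sketch of an equivalent bash function (assuming the standard workspace layout shown above) is:

set_capi_pipeline() {
  # upload the capi pipeline, filling {{...}} template vars from the private credentials file
  fly -t capi set-pipeline --pipeline=capi \
    --config="${HOME}/workspace/capi-ci/ci/pipeline.yml" \
    --load-vars-from="${HOME}/workspace/capi-ci-private/ci/credentials.yml"
}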

Rebuilding an environment

Sometimes an environment builds up too much local state and builds start to fail inexplicably. To rebuild the environment, first cd into the environment's directory under capi-ci-private/ and run direnv allow there. Once that is done, run:

bosh deployments # verify that there's a cf deployment there
bosh delete-deployment -d cf 

Once the deletion finishes (somewhere between half an hour and an hour), simply press the "+" button for the environment. We only need to run a new bosh/update-bosh-ENVIRONMENT build when we want to deploy a new BOSH director.
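If you prefer the command line to the web UI, triggering the job with fly is equivalent to pressing "+" (the job name below is illustrative; list the real ones with fly -t capi jobs --pipeline=capi):

fly -t capi trigger-job --job=capi/deploy-ENVIRONMENT --watch

The --watch flag streams the build output so you can confirm the environment comes back up.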

Intercepting a container

Containers used for recent builds stick around for some amount of time. One may open a shell in one of these containers to wreak havoc and sow discord, or at least to run commands in the context of Concourse to debug problems.

fly -t capi containers

Assuming a response like the following:

handle       worker                                pipeline  job                   build #  build id  type   name                                     attempt
eihnbu8id8k  afdc510a-8fb3-4209-b9b0-38df5c9a43d1  capi      cc-unit-tests         302      4696      task   run-cc-unit-tests-postgres               n/a

One can intercept the container like so:

fly -t capi intercept -j capi/cc-unit-tests -b 302 -s run-cc-unit-tests-postgres
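If the job coordinates are ambiguous, or the container is not attached to a job at all (a resource check container, for example), fly can also intercept directly by the handle from the containers listing above:

fly -t capi intercept --handle eihnbu8id8k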

Updating our Concourse deployment

Our Concourse is deployed by... Concourse! It runs in the Wasabi environment.

The pipeline that deploys concourse lives here: https://github.com/cloudfoundry/capi-ci/blob/master/ci/pipeline-concourse.yml

You may upload new configuration as follows:

fly -t capi set-pipeline --pipeline=concourse \
  --config="${HOME}/workspace/capi-ci/ci/pipeline-concourse.yml" \
  --load-vars-from="${HOME}/workspace/capi-ci-private/ci/credentials.yml"
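Note that a pipeline being set for the first time starts out paused; after uploading, unpause it so it actually schedules builds:

fly -t capi unpause-pipeline --pipeline=concourse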

Adding Slack notifications to a task

- name: do-something
  plan:
  - aggregate:
    - get: something-else
      on_failure:
        put: slack-alert
        params:
          text: '[do-something] OH NO! I did not! https://capi.ci.cf-app.com/?groups=can-we-bump'
          icon_emoji: {{slack_failure_emoji}}
  - task: doing-something
    file: capi-ci/ci/scripts/doing-something.yml
    on_failure:
      put: slack-alert
      params:
        text: '[do-something] OH NO! I did nothing! boshes https://capi.ci.cf-app.com/?groups=bosh'
        icon_emoji: {{slack_failure_emoji}}
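These on_failure stanzas are easy to mis-indent (params must sit at the same level as put). fly's validate-pipeline command catches YAML and schema errors locally, without touching the running pipeline:

fly validate-pipeline --config="${HOME}/workspace/capi-ci/ci/pipeline.yml"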

Executing jobs in ad-hoc containers

Sometimes you would like to execute pipeline work without changing the pipeline itself. For instance, if you are testing changes to a script that finds its way into a pipeline via git inputs, it can be frustrating to make the change by trial and error, seeing failure only after committing changes that break the pipeline.

Fly has an execute command that allows you to run arbitrary commands with inputs from the local file system.

ENV_PARAM_TO_IMPORT=foo \
  fly -t capi execute \
    --config ~/workspace/capi-ci/ci/scripts/check_can_i_bump.yml \
    --inputs-from capi/bump-cf-release \
    --input capi-ci=${HOME}/workspace/capi-ci -p

This grabs all of the inputs from the job named bump-cf-release as already defined in the pipeline. It's a nice way to pull in inputs that would be difficult to assemble from the local file system. You can then override individual inputs with local copies using the --input flag. Use -p (short for --privileged) to run the task with full privileges, i.e. as root in the container.

Only environment variables that match params declared in the task's config are passed into the container, so if your script depends on specific values normally supplied by the pipeline or by our secrets repository, set them on the command line (as with ENV_PARAM_TO_IMPORT above) to make the task work.
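For example, if the task's config declares a param (SOME_TASK_PARAM below is purely illustrative), exporting it locally is enough for it to appear inside the container:

# SOME_TASK_PARAM must match a param declared in the task's config
SOME_TASK_PARAM=value \
  fly -t capi execute \
    --config ~/workspace/capi-ci/ci/scripts/check_can_i_bump.yml \
    --inputs-from capi/bump-cf-release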
