
Using shell command in docker-compose.yml #4081

Closed
zkanda opened this issue Oct 27, 2016 · 69 comments

@zkanda

zkanda commented Oct 27, 2016

Hello, is there a way to use shell commands in docker-compose.yml file?
Here is my use case:

 version: '2'
 services:
   ci:
     image: jenkins
     volumes:
       - ./data:/var/jenkins_home
       - /var/run/docker.sock:/var/run/docker.sock
       - $(command -v docker):/usr/bin/docker
     group_add:
       - $(stat -c %g /var/run/docker.sock)
     ports:
       - "8080:8080"
       - "50000:50000"

Currently it's giving me this error:

ERROR: Invalid interpolation format for "volumes" option in service "ci": "${command -v docker}:/usr/bin/docker"
@shin-

shin- commented Oct 28, 2016

Hi @zkanda ,

Sorry, this is not something we support. Usually, this is done by setting environment variables and using variable substitution inside the Compose file instead.
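A minimal sketch of that pattern (not from the original reply; the variable names are made up, and GNU `stat` is assumed):

```shell
# Compute the values in the shell, where command substitution works, and
# export them so Compose can interpolate ${DOCKER_BIN} and ${DOCKER_GID}:
export DOCKER_BIN="$(command -v docker || true)"
export DOCKER_GID="$(stat -c %g /var/run/docker.sock 2>/dev/null || echo 0)"
# The Compose file would then reference them as plain variables:
#   volumes:
#     - ${DOCKER_BIN}:/usr/bin/docker
#   group_add:
#     - ${DOCKER_GID}
```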

@zkanda
Author

zkanda commented Dec 8, 2016

@shin- thanks, I can workaround with environment variables. Also .env seems very useful as well.
Relevant docs: https://docs.docker.com/compose/environment-variables/#/the-env-file

@zkanda zkanda closed this as completed Dec 8, 2016
@dtothefp

@zkanda did you ever get this to work? I tried this in my .env file:

DOCKER_BIN=`which docker`

and then in the docker-compose.yml

jenkinsmaster:
  build: jenkins-master
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ${DOCKER_BIN}:/usr/bin/docker
  ports:
    - "50000:50000"

But I keep getting

Cannot create container for service jenkinsmaster: create `which docker`: "`which docker`" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.

So it looks like commands are interpreted as strings in the .env file?

@shin-

shin- commented Feb 28, 2017

Commands are not expanded by Compose, in .env or elsewhere.
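A quick way to see this (a sketch using a throwaway file path):

```shell
# Values in a .env file are stored verbatim; Compose never runs $(...).
printf 'DOCKER_BIN=$(which docker)\n' > /tmp/demo.env
cat /tmp/demo.env    # prints the literal text: DOCKER_BIN=$(which docker)
# To get an expanded value, export it from a real shell before running Compose:
export DOCKER_BIN="$(which docker || true)"
```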

@marzlarz

marzlarz commented Jun 7, 2018

This would be very useful to have ....

@CpuID

CpuID commented Jul 16, 2018

+1

@jeffijoe

Here's a use case. When setting up Kafka, it needs the IP of the host machine. I have the following script to get that for me:

DOCKER_HOST_IP=$(ifconfig | grep 'inet .*br' | sed -E 's/.*inet (.*) netmask.*/\1/')

I reference that IP in the env. I can't just run docker-compose up anymore because I need to run that beforehand. And I need to inform the other devs on the team of this.
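A sketch of how that ends up wired together (the service name, image, and env key are assumed, not from the comment):

```yaml
services:
  kafka:
    image: confluentinc/cp-kafka   # hypothetical image
    environment:
      # DOCKER_HOST_IP must be exported by the helper script
      # before `docker-compose up` is run.
      KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
```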

@bfwg

bfwg commented Oct 13, 2018

+1

@davi5e

davi5e commented Oct 17, 2018

Would be useful to get the current user, as in:

services:
  foo:
    image: bar:latest
    user: ${CURRENT_USER-$(id -u):$(id -g)}

There must be other use cases too...

By the way, I find it odd that ${BAZ-default} works in Compose when the rest of Bash's functionality doesn't... (as far as I know, the - is one of Bash's parameter-expansion features, like substitution with ${VAR/search/replace}, for example)
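The working variant of that idea is to do the substitution in the shell and let Compose handle only the defaulting; a sketch:

```shell
# $(...) runs here in the shell; Compose then sees a plain value and can
# still apply its own ${CURRENT_USER-...} defaulting for anyone who unsets it.
export CURRENT_USER="${CURRENT_USER:-$(id -u):$(id -g)}"
echo "$CURRENT_USER"    # e.g. 1000:1000
```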

@binarymist

Yes @davi5e ... What I was trying for:

version: "3.7"

services:
  orchestrator:
    build: .
    ports:
      - "2000:2000"
    # Use type host until we can test link
    network_mode: "host"
    environment:
      - NODE_ENV=development
      - LOCAL_USER_ID=${id -u}

@Baathus

Baathus commented Nov 6, 2018

@davi5e another use case is when you want to create a docker volume and link it to your git repository directory. This absolute path will vary for each developer on the team.

@sido420

sido420 commented Nov 7, 2018

+1
Please reopen the issue

@sarkortrantor

+1
my use case is to publish a range of ports that is determined as a function of a "base" port

@moacode

moacode commented Dec 10, 2018

This would be very useful for setting UID/GID variables, as this is pretty much a requirement for using Docker in a development environment (without needing to use scripts to set this in an .env file).

@con-f-use

con-f-use commented Dec 24, 2018

+1

Could use this to label/tag the images generated by docker compose with the git tag and commit hash:

In .env:

GIT_VERSION=$(git describe --always --dirty --abbrev)
OUTER_PORT=6970

In docker-compose.yml:

version: '3'
services:
  nginx:
    restart: always
    build:
        context: ./nginx
        labels:
          org.label-schema.schema-version: "1.0"
          org.label-schema.version: "${GIT_VERSION}"
          org.label-schema.url: "https://mydocu-server.company.com/vcs/${GIT_VERSION}"
    ports:
      - ${OUTER_PORT}:8080

@ackerleytng

For those doing this in development, if your shell sets the UID variable, you can pass that in when building the local development image. Since the local development image is not shared, you don't have to worry about uid conflicts with your co-workers.
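For example (a sketch: the service name and build arg are assumed, and since bash sets UID but does not export it by default, you may need `export UID` in your shell first):

```yaml
services:
  dev:
    build:
      context: .
      args:
        # Compose substitutes ${UID} from the shell environment at build time;
        # the Dockerfile would declare a matching ARG HOST_UID.
        HOST_UID: ${UID:-1000}
```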

@laflaneuse

+1

@Hultner

Hultner commented May 29, 2019

@con-f-use From my testing, doing that just sends the literal command text, not its result. The only hack I've had success with so far is writing a script that wraps docker compose and sets up the environment variables.

@juliojmphjv

+1

@wadkar

wadkar commented Jun 28, 2019

TL;DR if you want to export those variables from .env file:

# set the path of .env file here
ENV_FILE="${ENV_FILE:-local.env}"
while IFS= read -r line; do
  export "$line"
done < <( grep --color=never -E -v -e '^#' -e '^[[:space:]]*$' "${ENV_FILE}" )

How to use:

  • Type it on your command line and press enter to modify your current shell environment

OR

  • Save it as bash script and source it

Caveats:

  • Saving this as a script and running it won't magically export those variables in your current shell environment. You'll have to source it or do some fancy eval.
  • (Maybe a good thing?) It will overwrite existing variables that share a name with ones in $ENV_FILE

Explanation: here
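An equivalent shorter form (an alternative sketch, assuming the file holds only plain VAR=VAL lines from a trusted source) is the shell's allexport option:

```shell
# Demo with a throwaway file; in practice ENV_FILE would point at your .env.
ENV_FILE="${ENV_FILE:-/tmp/demo-local.env}"
printf '# a comment\nFOO=bar\n' > "$ENV_FILE"
set -a           # auto-export every variable assigned while this is on
. "$ENV_FILE"    # sourcing executes the file, so only use it on trusted files
set +a
echo "$FOO"      # prints: bar
```

Note that because the file is sourced, any $(...) inside it would actually execute here, unlike in Compose's own .env handling.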

@mbrevda

mbrevda commented Jun 28, 2019 via email

@wadkar

wadkar commented Jun 28, 2019

Yeah, direnv is nice, but I am not sure it will be useful with Docker's .env file, which has simple lines with plain VAR=VAL syntax (as per this link). The idea is that the .env file is committed to the repository, and the build script can use it on my local machine, or docker-compose can use it on staging/deploy.

@mbrevda

mbrevda commented Jun 28, 2019 via email

@wadkar

wadkar commented Jun 28, 2019

Oh, okay.

I like to think that the .env file turns into environment variable when you execute docker-compose up.

My use case was to use the same .env file as the source of truth to run a local dev instance (possibly without the help of docker-compose). So yeah, maybe those lines in the .env file aren't really environment variables, but they surely turn into them when you deploy.

I still don't see how direnv is going to help in that case.

@sido420

sido420 commented Aug 10, 2019

Another use case is putting a command that generates a password and assigns it to an env var in the container.
This way the password would not actually be written in the .env or docker-compose.yml files.

I think this is going to be a huge benefit for many security related use cases.
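A sketch of that idea (the variable name and secret length are arbitrary):

```shell
# Generate a throwaway secret in the shell; it reaches the container via the
# environment and never gets written into .env or docker-compose.yml.
DB_PASSWORD="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
export DB_PASSWORD
echo "${#DB_PASSWORD}"   # 32 hex characters
```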

@jackwil1

+1

@stat1x

stat1x commented Sep 26, 2019

+1

@omoustaouda

+1, would be hugely useful for setting uid and gid without over-complicating the solution

 /\_/\
( o.o )
 > ^ <

@haneef95

Yup, that's what I'm having to do now as well, wrapping it with a shell script.

@arulrajnet

I have a workaround with direnv https://direnv.net/

.envrc file

export GIT_COMMIT=$(git log -1 --format=%h)
export GIT_FULL_COMMIT=$(git rev-list -1 HEAD)
export GIT_DATE=$(git log -1 --date=short --pretty=format:%ct)

Then docker-compose.yml file

version: '2.4'

services:

  test:
    build:
      context: .
      args:
        # GIT_COMMIT load from .envrc via direnv
        GIT_COMMIT: ${GIT_COMMIT:-unspecified}
    image: arulrajnet/test:dev

Then docker-compose build

Inspired from the comment of @mbrevda

@binarymist

direnv looks very cool @arulrajnet. Not sure how I'm only learning about its existence now though!

@bo5o

bo5o commented Dec 27, 2021

I use a similar workaround to the one described by @arulrajnet with some additions.

I have environment variables that depend on the current state of a git repo, e.g. something like APP_VERSION.
I usually set this to the output of git describe --tags, which will resolve to the name of the current tag if HEAD is exactly on that tag, otherwise it will resolve to a string like v1.0.0-23-f832r9c that tells you where you are in git history relative to the last tag (in this case, 23 commits ahead of tag v1.0.0 on commit f832r9c). I can then use it in docker-compose.yml, e.g. as a build argument.

Up to this point, .envrc looks like this

export APP_VERSION=$(git describe --tags)

The problem is that the variable gets stale whenever you switch between git branches or commit new changes. If you don't want to remember to run direnv reload every time, you can add the following to .envrc

export APP_VERSION=$(git describe --tags)

watch_file .git/HEAD
watch_file .git/refs/heads/$(git branch --show-current)

This will reload the environment when you switch branches or commit to the currently checked out branch. You can also apply different behavior for different branches. For example, to add ...-feature suffix to APP_VERSION only on feature branches.

if on_git_branch -r 'feature.*'; then
    export APP_VERSION=$(git describe --tags)-feature
else
    export APP_VERSION=$(git describe --tags)
fi

watch_file .git/refs/heads/$(git branch --show-current)

Note: on_git_branch -r ... implies watch_file .git/HEAD, so you don't need to set it explicitly.

@ErikOrjehag

It would be nice to reopen this and tag it as a feature request instead of a question.

@dmikushin

This likely won't get far, unfortunately. The problem is that different shells use different inline-command syntax. For example, bash uses $(id), whereas fish omits the '$' and needs just (id). And this is the simplest case; let's not talk about cmd, PowerShell and friends. So I've decided to stay with a shell-specific startup script, which seems to be a reasonable compromise:

> cat compose.sh
#!/bin/bash
export ID=$(id)
docker-compose up -d

> cat docker-compose.yml | grep ID -C 1
    environment:
      - ID=${ID}
    restart: unless-stopped

Then I start my containers as ./compose.sh.

@Bonifatius94

So I've decided to stay with a shell-specific startup script, which seems to be a reasonable compromise

Why do you think this is a good compromise? Docker-Compose files are supposed to hide deployment details from the calling scope; that's the whole purpose of this technology. If I need to inline-specify a variable right before "docker-compose up", I'm adding a lot of unnecessary complexity.

Why can't Docker-Compose inline-evaluate environment variables before it passes them to a container? What is so difficult about this? I mean, just take a command as a string and run it. The user has to think about whether their console supports the command. Skilled cloud devs capable of figuring out proper IaC should have no problem constraining their deployment target to a set of supported host systems and consoles; that's really not an argument. Btw, you're constraining the deployment as well by using a *.sh file, since it won't run on Windows/PowerShell, so basically nothing is gained by your shell-script approach; it's just tedious and overly complex 😂

I can see that evaluating a command on the host system can introduce crazy command-injection security holes. But for the reasons you've given, it doesn't make any sense to hold the feature back.

@robertlagrant

This is something I'd love as well. We have some monorepo-wide settings I'd like to keep in a single shell script, and just source it everywhere. Then I can update in one place and the change will ripple through. Another requirement I have is a version command, which I think someone else had too.

The alternative of .env files isn't too helpful, as many git repos explicitly .gitignore them, and the settings I want to do this with are important for building reproducibly.

@antoniogamizbadger

Coming here looking for the same feature :(

@liiight

liiight commented Jun 15, 2023

FWIW I use a combination of Taskfile and multiple docker-compose.yml files. My use case sets different environment variables depending on the current arch:

version: '3'
tasks:
  docker:compose:
    desc: Run a docker compose command
    summary: |
      Sets the base command that is needed to run a docker compose command.
      Note that this uses both 'docker-compose.yml' and 'docker-compose-{{ARCH}}.yml', where ARCH is determined by the host.
      The order of inclusion matters: the latter file can override the earlier one.
      
      https://docs.docker.com/compose/reference/#use--f-to-specify-name-and-path-of-one-or-more-compose-files
    internal: true
    cmd: >
      docker compose
      -f docker-compose.yml
      -f docker-compose-{{ARCH}}.yml
      {{.COMPOSE_COMMAND}}

    preconditions:
      - sh: docker compose version
        msg: Could not connect to docker server!

Then I have docker-compose.yml, docker-compose-amd64.yml & docker-compose-arm64.yml in my repo which override the relevant configuration based on the user platform.
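For illustration, such an override file might look like this (the service name, image tag, and variable are made up):

```yaml
# docker-compose-arm64.yml -- merged on top of docker-compose.yml via -f,
# so only the keys that differ per arch need to appear here.
services:
  app:
    image: myorg/app:dev-arm64   # hypothetical arch-specific tag
    environment:
      - TARGET_ARCH=arm64
```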

@illysky

illysky commented Jun 19, 2023

Yes please

@bubbleguuum

This is something that is really needed to make the docker-compose.yml file self-sufficient.

@ndeloof
Contributor

ndeloof commented Feb 11, 2024

@bubbleguuum how would you manage shell-script portability? Both cross-OS and regarding the installed commands on the user's workstation?

@bubbleguuum

bubbleguuum commented Feb 11, 2024

I suppose if it has not been done, it is because of such complications. I would say that if there is a need for some environment variables whose value is dynamically defined with shell commands, it is not portable anyway if not running on the system it is meant to run on. So you might as well run these commands as part of docker compose to make it easier and self-contained.

@ndeloof
Contributor

ndeloof commented Feb 11, 2024

I would say that if there is a need for some environment variables whose value is dynamically defined with shell commands, it is not portable anyway

It could be made portable if such a command can be containerized, as a sort of pre-up hook / init-container / to-be-defined. Would this be useful for your own usage?

@bubbleguuum

My usage is quite basic and similar to other posts. It would be for being able to write:

    user: $(id -u):$(id -g)
    group_add:
      - $(stat -c %g /dev/dri/renderD128)

rather than having to use env variables for these, which works just fine but is an extra step.
It would just be something nice to have. Not sure about the best technical solution for it though.

@ndeloof
Contributor

ndeloof commented Feb 11, 2024

so IIUC the original issue is that you need access to a GPU device, and run into permission issues doing so?

@bubbleguuum

bubbleguuum commented Feb 11, 2024

The group_add statement is for GPU access, yes, on Linux distros that set /dev/dri/renderD128 with permission 660; it allows GPU access when running the container under an unprivileged user.
The user statement is not related to GPU; it is just for running the container under the same uid/gid as the user running it. In my case, it is so files generated by the container workload in a mounted volume have the proper permissions for the user to access them on the host (a common case).

@ndeloof
Contributor

ndeloof commented Feb 11, 2024

ok, have you considered using gpu capabilities to let the container access the GPU device?
for your second use case, also see #7853

@bubbleguuum

bubbleguuum commented Feb 11, 2024

I have not looked at the gpu capabilities yet; I am much more familiar with invoking docker on the command line than with docker-compose, which is still quite new to me.
For the user thing, I suppose I'm already doing what #7853 suggests, that is, using an environment variable whose value is $(id -u):$(id -g)

@bubbleguuum

bubbleguuum commented Feb 11, 2024

The gpu capabilities thing (in a deploy statement) seems to only apply to NVIDIA for which it works fine (tested in the container I am working on). It is more or less equivalent to the --runtime nvidia and --gpus docker command-line options. /dev/dri/renderD128 access is only required for other GPUs (AMD, Intel) for use with VA-API (Intel, AMD) and QSV (Intel only, eventually performed by the CPU).

@ndeloof
Contributor

ndeloof commented Feb 11, 2024

ok. In addition to portability considerations, one challenge to address with this approach is that you assume here the docker compose .. command is run on the same machine the container will run on. By nature, the docker engine API is remote, so this could lead to weird/unexpected behaviors.

@mirca-milanov-igt-com

mirca-milanov-igt-com commented Feb 25, 2024

I don't know if this has been resolved, but I would find it really convenient if I could just set an env variable as:
TZ=$(cat /etc/timezone)
instead of setting it manually or using some third-party templating.

Edit: I tried to follow the instructions from #7843, but it throws an error for:
environment:
  - "TZ:${TIMEZONE~(cat /etc/timezone)}"
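Until that works, the usual workaround is again to resolve the value in the shell; a sketch (the UTC fallback is mine):

```shell
# Compose won't run $(cat ...); resolve TZ in the shell and export it instead,
# falling back to UTC on systems without /etc/timezone.
TZ="$(cat /etc/timezone 2>/dev/null || echo UTC)"
export TZ
echo "$TZ"
```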

@martinVyoo

martinVyoo commented Mar 28, 2024

ok. In addition to portability considerations, one challenge to address with this approach is that you assume here the docker compose .. command is run on the same machine the container will run on. By nature, the docker engine API is remote, so this could lead to weird/unexpected behaviors.

can't docker compose evaluate the commands on the machine where it runs, before sending the configuration to the engine, remote or not?

@con-f-use

con-f-use commented Mar 28, 2024

Edit: I tried to follow the instructions from #7843, but it throws an error for

Yeah, there's no changeset there; it was closed by the author without a merge. I don't think that ever made it into the codebase.
