
Forward ssh key agent into container #6396

Open

phemmer opened this issue Jun 13, 2014 · 191 comments
Labels
area/security exp/expert exp/intermediate kind/feature

Comments

@phemmer
Contributor

phemmer commented Jun 13, 2014

It would be nice to be able to forward an ssh key agent into a container during a run or build.
Frequently we need to build source code which exists in a private repository where access is controlled by ssh key.

Adding the key file into the container is a bad idea as:

  1. You've just lost control of your ssh key
  2. Your key might need to be unlocked via passphrase
  3. Your key might not be in a file at all, and only accessible through the key agent.

You could do something like:

# docker run -t -i -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e "SSH_AUTH_SOCK=/tmp/ssh_auth_sock" fedora ssh-add -l
2048 82:58:b6:82:c8:89:da:45:ea:9a:1a:13:9c:c3:f9:52 phemmer@whistler (RSA)

But:

  1. This only works for docker run, not build.
  2. This only works if the docker daemon is running on the same host as the client.

 

The ideal solution is to have the client forward the key agent socket, just like ssh can.
However, the difficulty is that it would require the remote API's build and attach calls to support proxying an arbitrary number of socket streams. A single two-way stream wouldn't be sufficient, because the ssh key agent is a unix domain socket that can have multiple simultaneous connections.
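
As a quick illustration of why one stream isn't enough — a minimal demo, assuming a running ssh-agent:

# two clients hitting $SSH_AUTH_SOCK concurrently both get answers,
# so forwarding must multiplex connections rather than pipe one stream
ssh-add -l & ssh-add -l & wait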

@SvenDowideit
Contributor

I wonder if #6075 will give you what you need

@phemmer
Contributor Author

phemmer commented Jun 17, 2014

A secret container might make it a little bit safer, but all the points mentioned still stand.

@slmingol

+1 I would find this capability useful as well, in particular when building containers that require software from private git repos. I'd rather not have to copy a repo key into the container; instead I'd like "docker build ..." to use some other method for gaining access to the unlocked SSH keys, perhaps through a running ssh-agent.

@jbiel
Contributor

jbiel commented Aug 2, 2014

+1. I'm just starting to get my feet wet with Docker and this was the first barrier that I hit. I spent a while trying to use VOLUME to mount the auth sock before I realized that docker can't/won't mount a host volume during a build.

I don't want copies of a password-less SSH key lying around and the mechanics of copying one into a container then deleting it during the build feels wrong. I do work within EC2 and don't even feel good about copying my private keys up there (password-less or not.)

My use case is building an erlang project with rebar. Sure enough, I could clone the first repo and ADD it to the image with a Dockerfile, but that doesn't work with private dependencies that the project has. I guess I could just build the project on the host machine and ADD the result to the new Docker image, but I'd like to build it in the sandbox that is Docker.

Here are some other folks that have the same use-case: https://twitter.com/damncabbage/status/453347012184784896

Please, embrace SSH_AUTH_SOCK, it is very useful.

Thanks

Edit: Now that I know more about how Docker works (FS layers), it's impossible to do what I described in regards to ADDing an SSH key during a build and deleting it later. The key will still exist in some of the FS layers.
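
To make the layer problem concrete, here's an anti-pattern sketch (hedged; the repo URL is a placeholder). The rm runs in a later layer, so the key stays recoverable from the ADD layer:

FROM ubuntu:latest
RUN apt-get update && apt-get -y install git openssh-client
# this bakes the key into a layer permanently...
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa \
 && ssh-keyscan github.com >> /root/.ssh/known_hosts \
 && git clone git@github.com:example/private-repo.git /app
# ...and this only shadows it in a newer layer; the key is still
# extractable from the image (e.g. docker save, then untar the ADD layer)
RUN rm /root/.ssh/id_rsa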

@arunthampi

+1, being able to use SSH_AUTH_SOCK will be super useful!

@razic

razic commented Sep 3, 2014

I use SSH keys to authenticate with Github, whether it's a private repository or a public one.

This means my git clone commands look like: git clone git@github.com:razic/my-repo.git.

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good. I cannot however mount my ~/.ssh during a docker build.
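
That presumably looks something like the sketch below (my-image is a placeholder with an ssh client installed; as later replies note, ssh's permission checks can reject bind-mounted keys unless ownership and modes line up):

docker run -it -v "$HOME/.ssh:/root/.ssh:ro" my-image ssh -T git@github.com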

@bruce

bruce commented Sep 13, 2014

👍 for ssh forwarding during builds.

@puliaiev

As I understand it, this is the wrong way. The right way is to create the docker image on a dev machine and then copy it to the docker server.

@slmingol

@SevaUA - no, that's not correct. This request is due to a limitation when doing docker build...: you cannot export a variable into this stage the way you can when doing a docker run .... The run command allows variables to be exported into the container while it is running, whereas build does not allow this. This limitation is partly intentional, based on how dockerd works when building containers. But there are ways around it, and the use case described here is a valid one. So this request is attempting to get this capability implemented in build, in some fashion.
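
To illustrate the asymmetry (a sketch; the image name and build arg are placeholders):

# run: host state and environment can be injected at container start
docker run -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" \
  -e SSH_AUTH_SOCK=/tmp/ssh_auth_sock my-image ssh-add -l

# build: no -v and no -e; only --build-arg, whose value is baked
# into the image metadata rather than supplied ephemerally
docker build --build-arg SOME_ARG=value .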

@kanzure

kanzure commented Oct 17, 2014

I like the idea of #6697 (secret store/vault), and that might work for this once it's merged in. But if that doesn't work out, an alternative is man-in-the-middle transparent proxying of the ssh traffic outside of the docker daemon, intercepting docker daemon traffic (not internally). Alternatively, all git+ssh requests could point at some locally-defined host that transparently proxies to github or wherever you ultimately need to end up.

@phemmer
Contributor Author

phemmer commented Oct 17, 2014

That idea has already been raised (see comment 2). It does not solve the issue.

@nodefourtytwo

+1 for ssh forwarding during builds.

@goloroden

+1 on SSH agent forwarding on docker build

@sabind

sabind commented Nov 19, 2014

+1 for ssh forwarding during build for the likes of npm install or similar.

@paulodeon

Has anyone got ssh forwarding working during run on OSX? I've put a question up here: http://stackoverflow.com/questions/27036936/using-ssh-agent-with-docker/27044586?noredirect=1#comment42633776_27044586 it looks like it's not possible with OSX...

@thomasdavis

+1 =(

@kevzettler

Just hit this roadblock as well, trying to run npm install pointed at a private repo. The setup looks like: host -> vagrant -> docker. I can ssh-agent forward host -> vagrant, but not vagrant -> docker.

@patra04

patra04 commented Jan 2, 2015

+1
Just hit this while trying to figure out how to get ssh agent working during 'docker build'.

@igreg

igreg commented Jan 16, 2015

+1, same as the previous commenters. Agent forwarding seems like the best solution to this issue when you need access to one or more private git repositories (think bundle install and npm install, for instance) while building the Docker image.

@tonivdv

tonivdv commented Jan 29, 2015

> I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good.

@razic Can you share how you get that working? Because when I tried that before it did complain about "Bad owner or permissions"

Unless you make sure that all containers run with a specific user or permissions which allows you to do that?

@jfarrell

+1 to SSH_AUTH_SOCK

@md5
Contributor

md5 commented Jan 29, 2015

@tonivdv have a look at the docker run command in the initial comment on this issue. It bind mounts the path referred to by SSH_AUTH_SOCK to /tmp/ssh_auth_sock inside the container, then sets the SSH_AUTH_SOCK in the container to that path.

@KyleJamesWalker

@md5 I assume @razic and @tonivdv are talking about mounting like this: -v ~/.ssh:/root/.ssh:ro, but when you do this the .ssh files aren't owned by root and therefore fail the security checks.

@tonivdv

tonivdv commented Jan 29, 2015

@KyleJamesWalker yup, that's what I understood from @razic as well, and it was one of my own attempts some time ago. So when I read that @razic was able to make it work, I was wondering how :)

@KyleJamesWalker

@tonivdv I'd also love to know if it's possible, I couldn't find anything when I last tried though.

@dragon788
Contributor

If you are using docker run, you can mount your .ssh with --mount type=bind,source="${HOME}/.ssh/",target="/root/.ssh/",readonly. The readonly flag is the magic: it masks the normal permissions, so ssh sees 0600 permissions and is happy. You can also play with -u root:$(id -u $USER) so that the root user in the container writes any files it creates with the same group as your user; that way you can at least read them, if not fully write them, without having to chmod/chown. The full command is sketched below.
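
Spelled out (the image and the trailing ssh test are placeholders):

docker run -it \
  --mount type=bind,source="${HOME}/.ssh/",target="/root/.ssh/",readonly \
  -u root:$(id -u $USER) \
  my-image ssh -T git@github.com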

@benton

benton commented Oct 13, 2017

Finally.

I believe this problem can now be solved using just docker build, by using multi-stage builds.
Just COPY or ADD the SSH key or other secret wherever you need it, and use it in RUN statements however you like.

Then, use a second FROM statement to start a new filesystem, and COPY --from=builder to import some subset of directories that don't include the secret.

(I have not actually tried this yet, but if the feature works as described...)

@Sodki

Sodki commented Oct 13, 2017

@benton multi-stage builds work as described; we use them. It's by far the best option for many different problems, including this one.

@benton

benton commented Oct 13, 2017

I have verified the following technique:

  1. Pass the location of a private key as a Build Argument, such as GITHUB_SSH_KEY, to the first stage of a multi-stage build
  2. Use ADD or COPY to write the key to wherever it's needed for authentication. Note that if the key location is a local filesystem path (and not a URL), it must not be in the .dockerignore file, or the COPY directive will not work. This has implications for the final image, as you'll see in step 4...
  3. Use the key as needed. In the example below, the key is used to authenticate to GitHub. This also works for Ruby's bundler and private Gem repositories. Depending on how much of the codebase you need to include at this point, you may end up adding the key again as a side-effect of using COPY . or ADD ..
  4. REMOVE THE KEY IF NECESSARY. If the key location is a local filesystem path (and not a URL), then it is likely that it was added alongside the codebase when you did ADD . or COPY . This is probably precisely the directory that's going to be copied into the final runtime image, so you probably also want to include a RUN rm -vf ${GITHUB_SSH_KEY} statement once you're done using the key.
  5. Once your app is completely built into its WORKDIR, start the second build stage with a new FROM statement, indicating your desired runtime image. Install any necessary runtime dependencies, and then COPY --from=builder against the WORKDIR from the first stage.

Here's an example Dockerfile that demonstrates the above technique. Providing a GITHUB_SSH_KEY Build Argument will test GitHub authentication when building, but the key data will not be included in the final runtime image. The GITHUB_SSH_KEY can be a filesystem path (within the Docker build dir) or a URL that serves the key data, but the key itself must not be encrypted in this example.

########################################################################
# BUILD STAGE 1 - Start with the same image that will be used at runtime
FROM ubuntu:latest as builder

# ssh is used to test GitHub access
RUN apt-get update && apt-get -y install ssh

# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG GITHUB_SSH_KEY=/path/to/.ssh/key

# Set up root user SSH access for GitHub
ADD ${GITHUB_SSH_KEY} /root/.ssh/id_rsa

# Add the full application codebase dir, minus the .dockerignore contents...
# WARNING! - if the GITHUB_SSH_KEY is a file and not a URL, it will be added!
COPY . /app
WORKDIR /app

# Build app dependencies that require SSH access here (bundle install, etc.)
# Test SSH access (this returns false even when successful, but prints results)
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth

# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*

########################################################################
# BUILD STAGE 2 - copy the compiled app dir into a fresh runtime image
FROM ubuntu:latest as runtime
COPY --from=builder /app /app

It might be safer to pass the key data itself in the GITHUB_SSH_KEY Build Argument, rather than the location of the key data. This would prevent accidental inclusion of the key data if it's stored in a local file and then added with COPY .. However, this would require using echo and shell redirection to write the data to the filesystem, which might not work in all base images. Use whichever technique is safest and most feasible for your set of base images.
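
A sketch of that variant (GITHUB_SSH_KEY_DATA is a made-up name for illustration, and printf is used instead of echo for predictable newline handling; note that the value still lands in this stage's build history, as discussed further down the thread):

ARG GITHUB_SSH_KEY_DATA
# write the key data from the build arg via shell redirection;
# assumes the base image's /bin/sh supports this (most do)
RUN mkdir -p /root/.ssh \
 && printf '%s\n' "${GITHUB_SSH_KEY_DATA}" > /root/.ssh/id_rsa \
 && chmod 600 /root/.ssh/id_rsa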

@omarabid

@jbiel Another year, and the solution I found is to use something like Vault.

@z-vr

z-vr commented Oct 24, 2017

Here's a link with 2 methods (squash and intermediate container described earlier by @benton)

@kommunicate

I'm just adding a note to say that neither of the current approaches will work if you have a passphrase on the ssh key you're using, since the agent will prompt for the passphrase whenever you perform the action that requires access. I don't think there's a way around this without passing the passphrase around (which is undesirable for a number of reasons).

@kinnalru

kinnalru commented Nov 30, 2017

A workaround: create a bash script (~/bin/docker-compose or similar):

#!/bin/bash

# kill the background socat proxy when docker-compose exits
trap 'kill $(jobs -p)' EXIT
# expose the host's ssh-agent socket on TCP port 56789
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &

/usr/bin/docker-compose "$@"

And in the Dockerfile, use socat:

...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
  && apk add --no-cache socat openssh \
  && /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
  && bundle install \
...
(bundle install is just the example here; any other ssh-using command will work as well.)

Then run docker-compose build

@tnguyen14

@benton why do you use RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*? Shouldn't it just be RUN rm -vf /root/.ssh/id*? Or maybe I misunderstood the intent here.

@Jokero

Jokero commented Dec 1, 2017

@benton Also, it's not safe to do:

RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1

You should verify the host key fingerprint instead of disabling StrictHostKeyChecking.

@zeayes

zeayes commented Aug 13, 2018

I solved this problem this way:

ARG USERNAME
ARG PASSWORD
RUN git config --global url."https://${USERNAME}:${PASSWORD}@github.com".insteadOf "ssh://git@github.com"

then build with

docker build --build-arg USERNAME=use --build-arg PASSWORD=pwd -t service .

But your private git server must first support cloning repos with username:password authentication.

@kinnalru

@zeayes RUN commands are stored in the image history, so your password is visible to others.

@thaJeztah
Member

thaJeztah commented Aug 13, 2018

Correct; when using --build-arg / ARG, those values will show up in the build history. It is possible to use this technique if you use multi-stage builds and trust the host on which images are built (i.e., no untrusted user has access to the local build history), and intermediate build-stages are not pushed to a registry.

For example, in the following example, USERNAME and PASSWORD will only occur in the history for the first stage ("builder"), but won't be in the history for the final stage;

FROM something AS builder
ARG USERNAME
ARG PASSWORD
RUN something that uses $USERNAME and $PASSWORD

FROM something AS finalstage
COPY --from=builder /the/build-artefacts /usr/bin/something

If only the final image (produced by "finalstage") is pushed to a registry, then USERNAME and PASSWORD won't be in that image.

However, in the local build cache history, those variables will still be there (and stored on disk in plain text).
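
You can see this locally (an illustration; the image name is made up):

# build-arg values consumed by RUN steps show up in the local image history
docker history --no-trunc my-builder-image | grep -i password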

The next generation builder (using BuildKit) will have more features, also related to passing build-time secrets; it's available in Docker 18.06 as an experimental feature, but will come out of experimental in a future release, and more features will be added (I'd have to check if secrets/credentials are already possible in the current version)

thaJeztah reopened this Aug 13, 2018
@zeayes

zeayes commented Aug 13, 2018

@kinnalru @thaJeztah Thanks! I use multi-stage builds, but the password can still be seen in the cached container's history.

@thaJeztah
Member

@zeayes Oh! I see I did a copy/paste error; last stage must not use FROM builder ... Here's a full example; https://gist.github.com/thaJeztah/af1c1e3da76d7ad6ce2abab891506e50

@cowlicks

cowlicks commented Oct 21, 2018

This comment by @kinnalru is the right way to do this #6396 (comment)

With this method, docker never handles your private keys. And it also works today, without any new features being added.

It took me a while to figure it out, so here is a clearer, improved explanation. I changed @kinnalru's code to use --network=host and localhost, so you don't need to know your ip address. (gist here)

This is docker_with_host_ssh.sh, it wraps docker and forwards SSH_AUTH_SOCK to a port on localhost:

#!/usr/bin/env bash

# ensure the processes get killed when we're done
trap 'kill $(jobs -p)' EXIT

# create a connection from port 56789 to the unix socket SSH_AUTH_SOCK (which is used by ssh-agent)
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
# Run docker
# Pass it all the command line args ($@)
# set the network to "host" so docker can talk to localhost
docker "$@" --network='host'

In the Dockerfile we connect over localhost to the host's ssh-agent:

FROM python:3-stretch

COPY . /app
WORKDIR /app

RUN mkdir -p /tmp

# install socat and ssh to talk to the host ssh-agent
RUN apt-get update && apt-get install -y git socat openssh-client \
  # create variable called SSH_AUTH_SOCK, ssh will use this automatically
  && export SSH_AUTH_SOCK=/tmp/auth.sock \
  # make SSH_AUTH_SOCK useful by connecting it to hosts ssh-agent over localhost:56789
  && /bin/sh -c "socat UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:localhost:56789 &" \
  # stuff I needed my ssh keys for
  && mkdir -p ~/.ssh \
  && ssh-keyscan gitlab.com > ~/.ssh/known_hosts \
  && pip install -r requirements.txt

Then you can build your image by invoking the script:

$ docker_with_host_ssh.sh build -f ../docker/Dockerfile .

@thaJeztah
Member

@cowlicks you may be interested in this pull request, which adds support for docker build --ssh to forward the SSH agent during build; docker/cli#1419. The Dockerfile syntax is still not in the official specs, but you can use a syntax=.. directive in your Dockerfile to use a frontend that supports it (see the example/instructions in the pull request).

That pull request will be part of the upcoming 18.09 release.
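
For reference, usage ends up looking roughly like this (a sketch based on the feature's docs; the repo URL is a placeholder):

# syntax=docker/dockerfile:experimental
FROM alpine
RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# the forwarded agent socket is mounted only for this RUN step
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git

Then build with BuildKit enabled, forwarding the default agent:

DOCKER_BUILDKIT=1 docker build --ssh default .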

@kalenp

kalenp commented Feb 15, 2019

It looks like this is now available in the 18.09 release. Since this thread comes up in search results ahead of the release notes and the Medium post, I'll cross-post them here.

Release Notes:
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds

Medium Post:
https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

Very exciting.

@rmoriz
Contributor

rmoriz commented Feb 15, 2019

@kalenp see also moby/buildkit#760 and moby/buildkit#825

@AkihiroSuda
Member

I think we can close this because we have docker build --ssh now

@ghost

ghost commented Sep 17, 2019

Related compose issue here: docker/compose#6865. Functionality to let Compose expose the SSH agent socket to containers is noted as landing in the next release candidate, 1.25.0-rc3.

@thaJeztah
Member

Stumbled upon this one while looking for something.

> I think we can close this because we have docker build --ssh now

I think it would be useful to have the same feature for docker run (and the like), not just for docker build. Ideally we'd use a similar implementation (i.e., not just a bind-mount) so that this can work beyond a "local" daemon as well.

Maybe there are other tickets around this, but let me re-open this one (in case there aren't).
