
Add with relative path to parent directory fails with "Forbidden path" #2745

Closed

Sjord opened this issue Nov 18, 2013 · 167 comments

Sjord (Contributor) commented Nov 18, 2013

If the Dockerfile contains an ADD instruction that points to a parent directory, the image build fails with the message "Forbidden path".

Example:

FROM tatsuru/debian
ADD ../relative-add/some-file /tmp/some-file

Gives:

$ ../bundles/0.6.6-dev/binary/docker-0.6.6-dev build .
Uploading context 20480 bytes
Step 1 : FROM tatsuru/debian
 ---> 25368de90486
Step 2 : ADD ../relative-add/some-file /tmp/some-file
Error build: Forbidden path: /tmp/relative-add/some-file

I would expect the file to be written to /tmp/some-file, not /tmp/relative-add/some-file.

Sjord (Contributor, Author) commented Nov 18, 2013

The build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message. It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.

It was not clear to me that the directory is moved to another directory in /tmp before the build, or that the paths are resolved after moving the directory. It would be great if this could be fixed, or if the error message could be made clearer. For example: "relative paths outside the sandbox are not currently supported" when supplying a relative path, or "The file %s is outside the sandbox in %s and cannot be added" instead of "Forbidden path".
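For illustration, the failing resolution can be reproduced by hand (a sketch; the temp directory name is made up):

$ cd /tmp/docker-12345                 # the uploaded context lands in a temp dir like this
$ realpath ../relative-add/some-file
/tmp/relative-add/some-file            # resolves outside the build dir, hence "Forbidden path"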

tianon (Member) commented Nov 18, 2013

#2692 is a good first pass at making this more clear.

@bryanlarsen commented:

Sorry to hijack, but this seems completely broken to me. I've got a grand total of about 2 hours of Docker experience, so this is more likely a problem with my understanding than with Docker.

I'm going to be creating approximately 10 images from our source tree. But to be able to use the ADD command, I would have to put the Dockerfile in the root of the tree, so only 1 image could be built. Not to mention the fact that this would result in a context of close to 100 megabytes.

I could do an ADD with URLs, but this would make it much more difficult for devs to create images for testing purposes.

Another option would be to add source to the image via volumes instead of adding it, but this really seems contrary to the spirit of Docker.

It seems to me that one partial, easy solution would be to modify the build command so that the context and the Dockerfile could be specified separately.
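For reference, this is roughly the shape the suggestion later took as the -f flag (a sketch; image names and paths are illustrative):

$ docker build -f docker/image1.Dockerfile -t myorg/image1 .
$ docker build -f docker/image2.Dockerfile -t myorg/image2 .
# the context (the tree root ".") stays the same; only the Dockerfile varies per image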

karellm commented Dec 19, 2013

Is there any reason for that change? Why can't we add files from the parent folder?

wwoods commented Feb 17, 2014

I find this behavior fairly frustrating, especially for "meta" Dockerfiles. E.g. I do most of my work in a /dev folder, and I have a /dev/environments git repo which contains e.g. /dev/environments/main/Dockerfile. It's very annoying that that Dockerfile is not allowed to:

ADD ../../otherProject /root/project

to add /dev/otherProject as /root/project. Using an absolute path breaks sharing this Dockerfile with other developers.

wwoods commented Feb 17, 2014

Another note - the only possible workaround I've found is to symlink the Dockerfile to the root /dev folder. This results in a very long and resource intensive "Uploading context" stage, which appears to (quite needlessly) copy all of the project directories to a temporary location. If the point of containers is isolation, and Dockerfiles (rightfully) don't seem to allow interacting with the build system, why does Docker need to copy all of the files? Why does it copy files that are not referenced in the Dockerfile at all?
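As an aside on upload size: Docker later added a .dockerignore file that trims what gets sent as context (a sketch; the listed entries are illustrative):

$ cat > .dockerignore <<'EOF'
.git
node_modules
*.log
EOF
$ docker build -t myimage .   # anything matched above is excluded from the context upload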

SvenDowideit (Contributor) commented:

@wwoods the short answer is that the docker client does not parse the Dockerfile. It tgz's the context (the current dir and all subdirs) up, passes it all to the server, which then uses the Dockerfile in the tgz to do the work.
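A rough shell equivalent of that flow (a sketch; the image name is made up):

$ tar -czf - . | docker build -t myimage -
# the client streams the tarred context to the daemon, which finds the Dockerfile inside it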

wwoods commented Feb 18, 2014

That seems pretty flawed, in that it greatly restricts the viable scope of Dockerfiles. Specifically, it disallows Dockerfiles layered on top of existing code configurations, forcing users to structure their code around the Dockerfile rather than the other way around. I understand the reasoning behind allowing the daemon to be on a remote server, but it seems like it would greatly improve Docker's flexibility to parse the Dockerfile and upload only specified sections. This would enable relative paths and reduce bandwidth in general, while making usage more intuitive (the current behavior is not very intuitive, particularly when playing on a dev box).

@waysidekoi commented:

Would it be possible to add an option flag that allows users to manually add specific directories to expand the context?

wwoods commented Feb 18, 2014

That would work, but again way less intuitive than just parsing the Dockerfile. Also, you're wasting a lot of upload bytes for no reason by uploading a lot of files you don't necessarily use.

SvenDowideit (Contributor) commented:

@wwoods - even if the client were to parse the Dockerfile and work out what files to send and which to discard, we still can't afford to bust out of the current directory and let your builder access your client's entire file system.

There are other solutions to your scenario that don't increase our insecurity footprint.

either way, restricting the Dockerfile to the current context is not going to change, so I'm going to close this.

wwoods commented Feb 18, 2014

How is it insecure to allow the builder to access files readable to the user? Why are you superseding linux file security? Please realize that this greatly limits the use cases of Docker and makes it much less enjoyable to add to existing workflows.

unclejack (Contributor) commented:

@wwoods I set up a GitHub repository with code and a Dockerfile in it, you clone it to ~/code/gh-repos/, cd to ~/foobarbaz and run docker build -t foobarbaz .. Let's say I'm a bad guy and I add something like this to the Dockerfile: ADD .. /foo. The image will now contain your entire home directory and anything you might have there. Let's say the resulting image also ends up on the Internet on some registry. Everyone who has the image also has your data: browser history & cookies, private documents, passwords, public and private SSH keys, some internal company data and some personal data.

We're not going to allow Docker ADD to bust out of its context via .. or anything like it.

wwoods commented Feb 18, 2014

Gotcha... still really need some workaround for this issue. Not being able to have a Dockerfile refer to its parents sure limits usage. Even if there's just a very verbose and scary flag to allow it, not having the option makes Docker useless for certain configurations (again, particularly when you have several images you want to build off of a common set of code). And the upload bandwidth is still a very preventable problem with the current implementation.

@bryanlarsen commented:

How about my suggestion of adding an option to the build command so that the root directory can be specified as a command line option? That won't break any security, and should cover every use case discussed here.

wwoods commented Feb 18, 2014

Sounds good to me. The most confusing part would be that the paths in the Dockerfile now "seem" incorrect because they are relative to a potentially unexpected root. But, since that root path is on the command line, it would be pretty easy to see what was going wrong. And a simple comment in the Dockerfile would suffice; maybe even an EXPECTS root directive or something along those lines to provide a friendly error message if the Dockerfile were run without a specified root directory.

SvenDowideit (Contributor) commented:

first up, when I want to build several images from common code, I create a common image that brings the code into that image, and then build FROM that. more significantly, I prefer to build from a version-controlled source - i.e. docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.

fundamentally, no, when I put on my black-hat, I don't just give you a Dockerfile, I tell you what command to run.

let me re-iterate. there are other ways to solve your issues, and making docker less secure is not the best one.

@bryanlarsen commented:

Which means you now have two repositories: one that contains the build scripts, and another containing the code. Which have to be properly synchronized. You can use git submodules or git subtrees or ad hoc methods, but all of those options have serious drawbacks. There are many reasons, some good, some bad, that corporations tend to have a single repository containing everything. AFAICT, Facebook is one example of a place that only has a single source repository that contains everything.

wwoods commented Feb 18, 2014

Sometimes you definitely do want to build from a random place with random files - generating a local test image not based off of a commit, for instance. If you have a different testing server or just want to run several different tests at once locally without worrying about database interactions between them, this would be really handy. There's also the issue where Dockerfiles can only RUN commands they have all information for - if your versioned remote source is password / key protected, this means you'd have to give your Docker image the password / key information anyhow to perform a build strictly with Docker.

There might be ways to solve these issues, but that doesn't mean they're pleasant, intuitive, or particularly easy to track down. I don't think docker would be less secure by allowing a change of context on the command line. I understand the reasons for not transparently stepping outside of the build context. On the other hand, not knowing what you're running will always be a security risk unless you're running it in a virtual machine or container anyway. To get around the Dockerfile limitations, packages might have to ship with a Makefile or script that could easily commit the very offenses you're trying to avoid. I don't think that the "docker build" command is the right place for the level of security you're talking about. Making it harder to use / require more external scaffolding makes it more tempting to step outside of Docker for the build process, exacerbating the exact issues you're worried about.

@thedeeno commented:

An alternative approach is to let you use any Dockerfile from a given context. This keeps things secure but also increases flexibility. I'm looking into this now; you can track it here: #2112

wwoods commented Feb 18, 2014

That would work fine; I'll just point out the security implications are the same. From my perspective that's fine though, in that the context change is very transparent on the command line.

As for why they're the same, you have:

docker build -f Dockerfile ..

Equivalent to the aforementioned

docker build -c .. .

I do like the Dockerfile / context specification split in #2112 better though. Good luck with that :) Hopefully it gets merged in.

@anentropic commented:

I have the following directory structure:

foo/Dockerfile
foo/shared
foo/shared/bar

in the Dockerfile I have:
ADD shared ./imported

if I cd foo and docker build . I get:

build: Forbidden path outside the build context: shared (/mnt/sda1/tmp/docker-build689526572/shared)

As I understood it, foo/ is the context of my build, so shared should not be outside the context...?

I don't understand what is wrong here. The example I copied from does ADD . ./somedir which I was trying to avoid, and that doesn't work either when I try to do it. Neither does ADD ./shared ./somedir.

@anentropic commented:

if I try to build the example I was working from (docker-dna/rabbitmq) it throws the same error too, so it's not just my modified version

I am using boot2docker on OSX

@anentropic commented:

Oh, it's this boot2docker/boot2docker#143

It works after upgrading boot2docker to the new version.

Vanuan commented Jan 3, 2018

@vilas27 Yes, this tutorial describes how to set a context directory.

Which implies that the problem here is that docker build --help is not descriptive enough:

docker build --help

Usage:	docker build [OPTIONS] PATH | URL | -

Build an image from a Dockerfile

People should refer to the extended description on the website:

https://docs.docker.com/engine/reference/commandline/build/

The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.

The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.

Reading that, it's quite easy to grasp what "Forbidden path" really means. It doesn't have anything to do with permissions.
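Concretely, the three context forms from that usage line (a sketch; names are illustrative):

$ docker build .                                   # PATH: a local directory is the context
$ docker build https://github.com/user/repo.git    # URL: a git repository is the context
$ docker build - < context.tar.gz                  # "-": a tarball streamed on stdin is the context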

@StingyJack wrote:

I should not need to digest tomes of information just to "get started".

Writing Dockerfiles for your project doesn't sound like "getting started" to me. It requires quite advanced knowledge. The getting-started tutorial describes setting up a very simple project that doesn't require any prior knowledge. But if you need to set up anything more complex, you must know how Docker works. And yes, it requires quite a lot of time to figure out.

@StingyJack commented:

When something exists but you are not permitted access to it, it is forbidden. If it's not valid for some other reason, it's invalid for some other reason.

Writing Dockerfiles for your project doesn't sound "get started" to me

I started with the one that is created when adding Docker support to a VS project. It would run in a debugger, but that isn't very useful if I need to make a deployable image. Trying to use the CLI to build the image outside of VS just reports the temp folder error. The file looks correct ...

FROM microsoft/aspnet:4.7
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
PS C:\WINDOWS\system32> docker build C:\Users\astanton\source\repos\WebApplication3

"Docker, build using this folder and the docker file in it." seems to be what the CLI help tells me this means. Nothing about a build context or additional paths, just the one path to direct it to, be it a filesystem path, or a URL, or a "-" (the dash must be a *nix convention). The command looks correct...
[screenshot of the command and its output]

Both the command and the dockerfile look correct, yet it does not work.

Vanuan commented Jan 9, 2018

Nothing about a build context or additional paths

In Unix, --help usually means "short help". If you need extended info you should use man:
https://unix.stackexchange.com/questions/86571/command-help-vs-man-command

I don't see how this problem you have is related to "Forbidden path" issue.

The error message tells you that there's no folder named obj\Docker\publish in C:\Users\astanton\source\repos\WebApplication3\. Where do you see "Forbidden path"?

@StingyJack commented:

I don't see how this problem you have is related to "Forbidden path" issue.
You are applying the word with an incorrect meaning. I don't mean offense; I can only speak one language and you clearly can communicate in more than one. Personally I would rather be corrected than continue to speak incorrectly.

if --help is the short help (that goes on for a few screens worth of console), then what is -h ?

That usage example says run the executable "docker" with command (option) "build" and give it a path. And then says "build an image from a docker file". So the path must be to the docker file, yes? If there are other params required it should say that, not make me look up "man pages".

I still don't understand why it's choosing to use the temp folder and then complaining about it, when I gave the path to the docker file as the parameter and that has a relative path in it.

javabrett (Contributor) commented:

$ docker build --help
Usage:	docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
Options:
...
-f, --file string                Name of the Dockerfile (Default is 'PATH/Dockerfile')

PATH points to your build context, which must include all the content you want to access in your build. The default Dockerfile is in that directory, but you can override that with -f.

Your PATH needs to contain an obj/Docker/publish directory, by the look of it.
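Applied to the case above (a sketch; the publish output must already exist under the context):

PS> docker build -t webapplication3 C:\Users\astanton\source\repos\WebApplication3
# COPY ${source:-obj/Docker/publish} resolves against that PATH, so
# ...\WebApplication3\obj\Docker\publish has to be populated before the build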

Vanuan commented Jan 10, 2018

I still don't understand why it's choosing to use the temp folder and then complaining about it, when I gave the path to the docker file as the parameter and that has a relative path in it.

It doesn't have to use the temp folder; that's just an implementation detail. What really happens is that it uses a remote build. I.e., while physically it's just a different location on the same machine, logically docker build doesn't build on your local machine, it builds on a remote machine. So you must provide all the files the build needs inside the directory or tarball specified by PATH or URL, so that these files are copied to that remote machine and used to produce an image. This approach is called "client/server architecture".

if --help is the short help (that goes on for a few screens worth of console), then what is -h ?

That is also a Unix convention: https://unix.stackexchange.com/a/6974
CLI flags that are composed of one letter are specified using a minus (-h). They usually have an expanded version with identical meaning that is longer to type (--help). I.e. both -h and --help should produce the same output.

says "build an image from a docker file". So the path must be to the docker file, yes? If there are other params required it should say that, not make me look up "man pages".

This sounds like a valid point.
Please create a new issue if you want the docker build --help to be changed to be more descriptive.

integrii commented Jan 15, 2018

I hit this all the time with Go projects, where it is common for packages to live at the root of the project and not in the buildable command directory (where I want the Dockerfile to live).

project/ 
    pkg/
        utilities/
            utils.go
    cmd/
        binary-name/
            main.go
            Dockerfile

I can't build the Dockerfile and use the newest version of the utilities dir if I build from the binary-name directory.

I could of course run go get in my Dockerfile, but often I want to use the local (modified) versions of my packages that are not yet published upstream.

Here are the workarounds I've come up with (some seen in this issue):

Move Dockerfiles to project root
Move your Dockerfile to the root of your project and refactor it to add files and directories from the project root path onward (ADD ./pkg /go/src/github.com/integrii/project/pkg).

  • Cons: Some people have huge projects, and Docker uploads the entire "context" it's running in to whatever the building server is, which is sometimes a remote server over VPN. This is too slow for some people.

Make a temporary dir for outside dependencies
Create a Makefile in your binary-name directory that copies outside dependencies into a temporary directory, then calls docker build. Refactor your Dockerfile to add files from the temporary directory created by your Makefile. Then have the Makefile call a clean step that deletes the temporary directory (see the sketch after this list).

  • Cons: Your Makefile may break, leaving garbage duplicated files around. You can make this less of an issue by adding the temporary directory to a .gitignore file.

Run Dockerfiles from root context
Run your Dockerfile from the project root "context", but let it keep living in the binary-name dir (docker build -f cmd/binary-name/Dockerfile .). Refactor the Dockerfile to add files relative to the root of the project.

  • Cons: Still does not work with huge repos, which end up shipping the entire project to a remote Docker server in some environments.
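A sketch of the "temporary dir" workaround above as a shell script (paths and names are illustrative):

#!/bin/sh
set -e
mkdir -p .deps
cp -r ../../pkg .deps/pkg       # pull the outside dependency into the build context
docker build -t binary-name .   # the Dockerfile ADDs from .deps/ instead of ../../
rm -rf .deps                    # clean up so stale copies don't linger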

Vanuan commented Jan 15, 2018

@integrii
Read the recommended option above: do not use a Dockerfile for development. Use a Go base image + docker-compose + mounted folders.
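A sketch of that setup (service name and paths are illustrative):

# docker-compose.yml
version: '3'
services:
  app:
    image: golang:1.9
    volumes:
      - .:/go/src/github.com/integrii/project
    working_dir: /go/src/github.com/integrii/project/cmd/binary-name
    command: go run main.go

$ docker-compose up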

Dispersia commented Feb 14, 2018

After 153 comments I would have figured this would be understood as a basic needed feature... Using ASP.NET, the build is based off of a directory. If you're recommending that I have separate csprojects just for a Docker build, that is crazy. The official dotnet-core-architecture example shows building outside of Docker, then just copying the built contents into a Docker container... that can't seriously be the considered way of doing this. Our directory is almost 800 MB; I'm not sending that much for each project that needs to build.

Please, just give us this basic feature.

Vanuan commented Feb 14, 2018

@Dispersia A lot of people confuse docker build and Dockerfile with build scripts. Dockerfile was never intended to be a general-purpose build tool.

Though you can use a Dockerfile this way, you're on your own with its caveats.

Yes, if you want a production image, you should run a container to build your artifacts and then copy those build artifacts to the place where the Dockerfile is located, or change the context to the directory with your build artifacts.
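For reference, multi-stage builds (Docker 17.05+) fold that two-step flow into a single Dockerfile (a sketch; project paths are illustrative):

FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish api1/api1.csproj -c Release -o /out

FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /out .
# only the published output lands in the final image; the SDK stage is discarded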

Vanuan commented Feb 14, 2018

See it this way: a Dockerfile is a set of instructions for copying your runtime files into an empty docker image.

@Dispersia commented:

@Vanuan that's the point of multi-stage builds, correct? If project B references project A, project B won't build, because it expects a csproj reference to the project, not a dll reference; even if you get the artifacts of project A, unless you make a separate csproj that references the compiled dll instead of the source, it won't build. And yeah, I am confused what you mean by "docker build vs Dockerfile with build scripts". docker build is for the Dockerfile, correct? If not, I don't feel that should be the description on the top of the Docker Build page :P

Vanuan commented Feb 14, 2018

So you're saying you're using csproj files to build multiple projects simultaneously? In that case you need access to all the source files, which is 800 MB in your case. I don't see what you expect. You either build them inside or outside a container. In either case you'll end up with dll and exe files which you then put into an image.

Dispersia commented Feb 14, 2018

Structure:

  • Libraries
    --- Library 1
    --- Library 2
    --- Library 3
  • APIs
    --- API 1 - reference library 1
    --- API 2 - references library 2 and library 3

If I request API 1 to be built, I do NOT need to send Library 2, Library 3, and API 2. I ONLY need Library 1 and API 1.

This is a C# project reference:
<ProjectReference Include="..\..\..\BuildingBlocks\EventBus\EventBusRabbitMQ\EventBusRabbitMQ.csproj" />

Your options:

A. Change project references to local DLLs, destroying all IntelliSense for every library

B. Hot-swap project references to build against DLLs as needed for each individual Docker build (hundreds of hot swaps, sounds fun)

C. Send 800 MB per build, when only two of those projects are actually needed

D. Don't use Docker for anything build related, which is one of the main reasons I want to move to Docker (remove the dependency on the developer machine: one might use a Mac with .NET Core 1.1 installed, one might have 2.0 installed on Windows, etc.).

E. Fix Docker and make everyone happy.

thaJeztah (Member) commented:

The daemon still needs to have all files sent. Some options that have been discussed:

docker build \
  --context lib1:/path/to/library-1 \
  --context lib2:/path/to/library-2 \
  --context api1:/path/to/api1 \
  .

Inside the Dockerfile, those paths could be accessible through (e.g.) COPY --from context:lib1
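For the record, this is close to the shape the feature eventually took as BuildKit's named contexts (a sketch; requires a much newer docker/buildx than existed in this thread, and the paths are illustrative):

$ docker buildx build \
    --build-context lib1=../libraries/library1 \
    -f apis/api1/Dockerfile -t api1 apis/api1
# and inside the Dockerfile: COPY --from=lib1 . /src/lib1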

Dispersia commented Feb 14, 2018

Yes, I was going down the line of multiple build contexts. That looks beautiful and I would love that feature! I didn't see it portrayed quite that way, but that looks great to me at least. I could just manage my own paths.

Vanuan commented Feb 14, 2018

@Dispersia

Don't use Docker for anything build related

I didn't say that. I said "don't use a Dockerfile for anything build related". You could perfectly well use Docker for builds:

# docker-compose.yml
version: '3'
services:
  api1:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - ./src:/src
    working_dir: /src/api1
    # run through a shell so the two dotnet steps chain
    command: sh -c "dotnet restore && dotnet publish -c Release -o out"
  api2:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - ./src:/src
    working_dir: /src/api2
    command: sh -c "dotnet restore && dotnet publish -c Release -o out"

docker-compose run api1
docker-compose run api2

@rulai-huajunzeng commented:

@thaJeztah Multiple build contexts sound exactly like what I am looking for! How soon can we see this feature?

thaJeztah (Member) commented:

So far it has just been a possible approach that was discussed; it would need a more thorough design, and would also have to be looked at in light of future integration with https://github.com/moby/buildkit (which has tons of improvements over the current builder, so possibly has other approaches/solutions for this problem).

I can open a separate issue for the proposal for discussion; if a design/feature is decided on, contributions are definitely welcome.

tfsantosbr commented Apr 5, 2018

For .NET I resolved this with a workaround: I created a docker-compose file for the build, and the original docker-compose generates the image for production.

See:
https://github.com/taigosantos/dotnet-core-poc/tree/master/docker/dotnet-docker

defields923 commented May 10, 2018

I just encountered this issue. I have multiple Dockerfiles and a docker-compose file housed in one repo. I've been using an nginx container to proxy my client-side code to the backend, but I am now trying to dockerize the webpack configuration so that it will copy over the code and watch for changes. I've run into this "Forbidden path" issue, since my COPY command has to reach into a sibling directory.

thaJeztah (Member) commented:

Opened #37129 with a proposal for multiple build-contexts

markelog added a commit to grafana/grafana that referenced this issue May 27, 2019
With the previous configuration `docker-compose build` was always failing.
This moves the dockerfiles in the parent dir and changes paths as a result.

Ref moby/moby#2745