Add with relative path to parent directory fails with "Forbidden path" #2745
The build actually happens in a temporary copy of the context. It was not clear to me that the directory is moved to another location before the build runs.
#2692 is a good first pass at making this more clear.
Sorry to hijack, but this seems completely broken to me. I've got a grand total of about 2 hours of Docker experience, so this is more likely a problem with my understanding than with Docker.

I'm going to be creating approximately 10 images from our source tree. But to be able to use the ADD command, I would have to put the Dockerfile in the root of the tree, so only 1 image could be built. Not to mention that this would result in a context of close to 100 megabytes. I could do an ADD with URLs, but this would make it much more difficult for devs to create images for testing purposes. Another option would be to add source to the image via volumes instead of adding it, but this really seems contrary to the spirit of Docker.

It seems to me that one partial, easy solution would be to modify the build command so that the context and the Dockerfile could be specified separately.
Is there any reason for that change? Why can't we add files from the parent folder?
I find this behavior fairly frustrating, especially for "meta" Dockerfiles. E.g. I have a /dev folder I do most of my work in, and a /dev/environments git repo which has e.g. /dev/environments/main/Dockerfile. It's very annoying that that Dockerfile is not allowed to add /dev/otherProject as /root/project. Using an absolute path breaks sharing this Dockerfile with other developers.
Another note - the only possible workaround I've found is to symlink the Dockerfile to the root /dev folder. This results in a very long and resource-intensive "Uploading context" stage, which appears to (quite needlessly) copy all of the project directories to a temporary location. If the point of containers is isolation, and Dockerfiles (rightfully) don't seem to allow interacting with the build system, why does Docker need to copy all of the files? Why does it copy files that are not referenced in the Dockerfile at all?
@wwoods the short answer is that the docker client does not parse the Dockerfile. It tgz's the context (current dir and all subdirs) up, passes it all to the server, which then uses the Dockerfile in the tgz to do the work.
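A rough local illustration of that behavior; every file and directory name below is made up. The whole tree is archived before any Dockerfile parsing happens, so unreferenced files ride along:

```shell
# Mimic the client: archive the ENTIRE context directory, whether or
# not the Dockerfile references each file. All names are illustrative.
ctx=$(mktemp -d)
mkdir -p "$ctx/src" "$ctx/unused"
echo 'FROM scratch'      > "$ctx/Dockerfile"
echo 'referenced by ADD' > "$ctx/src/app.txt"
echo 'never referenced'  > "$ctx/unused/big-blob.txt"

# This is effectively what gets uploaded to the daemon:
tar -C "$ctx" -czf "$ctx.tgz" .
tar -tzf "$ctx.tgz"   # lists src/app.txt AND unused/big-blob.txt
```

This is why the "Uploading context" stage copies files the Dockerfile never mentions: the client has no idea which files matter.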
That seems pretty flawed, in that it greatly restricts the viable scope of Dockerfiles. Specifically, it disallows Dockerfiles layered on top of existing code configurations, forcing users to structure their code around the Dockerfile rather than the other way around. I understand the reasoning behind allowing the daemon to be on a remote server, but it seems like it would greatly improve Docker's flexibility to parse the Dockerfile and upload only specified sections. This would enable relative paths and reduce bandwidth in general, while making usage more intuitive (the current behavior is not very intuitive, particularly when playing on a dev box).
Would it be possible to add an option flag that could allow users to manually add specific directories to expand the context?
That would work, but again it's way less intuitive than just parsing the Dockerfile. Also, you're wasting a lot of upload bytes for no reason by uploading many files you don't necessarily use.
@wwoods - even if the client were to parse the Dockerfile and work out what files to send and which to discard, we still can't afford to bust out of the current directory and let your builder access your client's entire file system. There are other solutions to your scenario that don't increase our insecurity footprint. Either way, restricting the Dockerfile to the current context is not going to change, so I'm going to close this.
How is it insecure to allow the builder to access files readable to the user? Why are you superseding Linux file security? Please realize that this greatly limits the use cases of Docker and makes it much less enjoyable to add to existing workflows.
@wwoods I set up a GitHub repository with code and a Dockerfile in it, you clone it to ~/code/gh-repos/, cd to ~/foobarbaz and run the build from there. We're not going to allow Docker ADD to bust out of its context via `..`.
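To make that threat model concrete, here is a hypothetical (and rightly refused) Dockerfile; the paths are invented for illustration:

```dockerfile
# If ADD could follow "..", then cloning an untrusted repo and building
# it would let the image author read files outside the checkout:
FROM busybox
ADD ../../.ssh /stolen-keys
```

Because the builder runs with the invoking user's privileges, any file readable by that user from the working directory upward would be fair game.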
Gotcha... still really need some workaround for this issue. Not being able to have a Dockerfile refer to its parents sure limits usage. Even if there's just a very verbose and scary flag to allow it, not having the option makes Docker useless for certain configurations (again, particularly when you have several images you want to build off of a common set of code). And the upload bandwidth is still a very preventable problem with the current implementation.
How about my suggestion of adding an option to the build command so that the root directory can be specified as a command line option? That won't break any security, and should cover every use case discussed here.
Sounds good to me. The most confusing part would be that the paths in the Dockerfile now "seem" incorrect, because they are relative to a potentially unexpected root. But since that root path is on the command line, it would be pretty easy to see what was going wrong. And a simple comment in the Dockerfile would suffice; maybe even an EXPECTS root directive or something along those lines to provide a friendly error message if the Dockerfile were run without a specified root directory.
First up, when I want to build several images from common code, I create a common image that brings the code into that image, and then build FROM that. More significantly, I prefer to build from a version-controlled source. Fundamentally, no: when I put on my black hat, I don't just give you a Dockerfile, I tell you what command to run. Let me re-iterate: there are other ways to solve your issues, and making Docker less secure is not the best one.
Which means you now have two repositories: one that contains the build scripts, and another containing the code. Which have to be properly synchronized. You can use git submodules or git subtrees or ad hoc methods, but all of those options have serious drawbacks. There are many reasons, some good, some bad, that corporations tend to have a single repository containing everything. AFAICT, Facebook is one example of a place that has a single source repository containing everything.
Sometimes you definitely do want to build from a random place with random files: generating a local test image not based off of a commit, for instance. If you have a different testing server, or just want to run several different tests at once locally without worrying about database interactions between them, this would be really handy. There's also the issue that Dockerfiles can only RUN commands they have all the information for: if your versioned remote source is password/key protected, you'd have to give your Docker image the password/key information anyhow to perform a build strictly with Docker. There might be ways to solve these issues, but that doesn't mean they're pleasant, intuitive, or particularly easy to track down.

I don't think Docker would be less secure by allowing a change of context on the command line. I understand the reasons for not transparently stepping outside of the build context. On the other hand, not knowing what you're running will always be a security risk unless you're running it in a virtual machine or container anyway. To get around the Dockerfile limitations, packages might have to ship with a Makefile or script that could easily commit the very offenses you're trying to avoid. I don't think that the "docker build" command is the right place for the level of security you're talking about. Making it harder to use / requiring more external scaffolding makes it more tempting to step outside of Docker for the build process, exacerbating the exact issues you're worried about.
An alternative approach is to let you use any Dockerfile from a given context. This keeps things secure but also increases flexibility. I'm looking into this now; you can track it here: #2112
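For readers arriving later: this approach is what eventually shipped as the `-f`/`--file` flag on `docker build`, which lets the Dockerfile live outside the context while ADD/COPY paths still resolve against the context argument. The paths and image name below are illustrative:

```shell
# Build using a Dockerfile that lives outside the context directory.
# ADD/COPY paths in the Dockerfile resolve against the final argument
# (the context), not against the Dockerfile's own location.
docker build -f /dev/environments/main/Dockerfile -t myimage /dev/otherProject
```

This addresses the "meta Dockerfile" scenario earlier in the thread without letting ADD escape the context.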
That would work fine; I'll just point out the security implications are the same. From my perspective that's fine though, in that the context change is very transparent on the command line. As for why they're the same, you have:
Equivalent to the aforementioned
I do like the Dockerfile / context specification split in #2112 better though. Good luck with that :) Hopefully it gets merged in.
I have the following directory structure:
in the Dockerfile I have: if I
as I understood it I don't understand what is wrong here. The example I copied from does |
If I try to build the example I was working from (docker-dna/rabbitmq), it throws the same error too, so it's not just my modified version. I am using boot2docker on OSX.
Oh, it's this: boot2docker/boot2docker#143. It works after upgrading boot2docker to a new version.
Note: Dockerfile ADD does not accept relative paths outside the context. moby/moby#2745
@vilas27 Yes, this tutorial describes how to set a context directory. Which implies that the problem here is that
People should refer to the extended description on the website: https://docs.docker.com/engine/reference/commandline/build/
Reading it makes it quite easy to grasp what "Forbidden path" really means: it doesn't have anything to do with permissions.
Writing Dockerfiles for your project doesn't sound like "getting started" to me. It requires quite advanced knowledge. The getting-started tutorial describes setting up a very simple project that doesn't require any prior knowledge. But if you need to set up anything more complex, you must know how Docker works. And yes, that takes quite a lot of time to figure out.
I don't see how the problem you have is related to the "Forbidden path" issue. The error message tells you that there's no folder named
That usage example says: run the executable "docker" with the command (option) "build" and give it a path. And then it says "build an image from a docker file". So the path must be to the docker file, yes? If there are other params required, it should say that, not make me look up man pages. I still don't understand why it's choosing to use the temp folder and then complaining about it when I gave the path to the docker file as the parameter and that has a relative path in it.
Your |
It doesn't have to use the temp folder; it's just an implementation detail. What really happens is that it uses a remote build. That is, while physically it's just a different location on the same machine, logically,
That is also a Unix convention: https://unix.stackexchange.com/a/6974
This sounds like a valid point. |
I hit this all the time with Go projects, where it is common for packages to live at the root of the project, and not in the buildable command directory (where I want the Dockerfile to live).
I can't build the Dockerfile and use the newest version of the utilities dir if I build from the command directory. I could of course run the build from the project root instead. Here are the workarounds I've come up with (some seen in this issue): Move Dockerfiles to project root
Make a temporary dir for outside dependencies
Run Dockerfiles from root context
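A mitigation for the slow "Uploading context" stage when using the root-context workarounds above: a `.dockerignore` file at the context root excludes paths from the upload before anything is sent to the daemon. The entries below are examples, not a prescription:

```
# .dockerignore (lives at the build context root)
.git
node_modules
**/*.log
build/
```

This doesn't lift the "Forbidden path" restriction, but it makes building from the project root far cheaper when most of the tree is irrelevant to the image.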
@integrii |
After 153 comments I would have figured this would be understood as a basic needed feature. Using ASP.NET, the build is based off of a directory. If you're recommending me to have separate csprojects just for a Docker build, that is crazy. The official dotnet-core-architecture example shows building outside of Docker, then just copying the built contents into a Docker container... that can't seriously be the considered way of doing this. Our directory is almost 800 MB; I'm not sending that much for each project that needs to build. Please, just give us this basic feature.
@Dispersia A lot of people confuse docker build and Dockerfile with build scripts. Dockerfile was never intended to be a general-purpose build tool. Though you can use Dockerfile this way, you're on your own with its caveats. Yes, if you want a production image, you should run a container to build your artifacts and then copy those build artifacts to the place where the Dockerfile is located, or change the context to the directory with your build artifacts.
See it this way: Dockerfile is a set of instructions to copy your runtime files to an empty docker image.
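Seen that way, a multi-stage Dockerfile can express both halves (build the artifacts in one container, copy only the runtime files into a slim image) in a single file. A sketch; the image tags, project name, and paths are illustrative:

```dockerfile
# Stage 1: build the artifacts inside a container
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# Stage 2: copy only the runtime files into a small image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The final image contains only the second stage; the SDK layer and intermediate build output are discarded.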
@Vanuan that's the point of multi-stage builds, correct? If project B references project A, project B won't build, because it expects a csproj reference to the project, not a dll reference. Even if you get the artifacts of project A, it won't build unless you make a separate csproj that references the compiled dll instead of the source. And yeah, I am confused what you mean by "docker build vs Dockerfile with build scripts". docker build is for Dockerfile, correct? If not, I don't feel that should be the description on the top of the docker build page :P
So you're saying you're using csproj files to build multiple projects simultaneously? In this case you need to access all the source files, which is 800 MB in your case. I don't see what you expect. You either build them inside or outside a container. In either case you'll end up with dll and exe files which you then put into an image.
Structure:
If I request API 1 to be built, I do NOT need to send Library 2, Library 3, and API 2. I ONLY need Library 1 and API 1. This is a C# project reference. Your options:

A. Change project references to local DLLs, destroying all IntelliSense for every library
B. Hot-swap project references to build against DLLs as needed for each individual docker build (hundreds of hot swaps, sounds fun)
C. Send 800 MB per build, when only 2 of those projects are actually needed
D. Don't use Docker for anything build related, which defeats one of the main reasons I want to move to Docker (removing dependency on developer machines; one might use a Mac with .NET Core 1.1 installed, one might have 2.0 installed on Windows, etc.)
E. Fix Docker and make everyone happy.
The daemon still needs to have all files sent. Some options that have been discussed:

docker build \
  --context lib1:/path/to/library-1 \
  --context lib2:/path/to/library-2 \
  --context api1:/path/to/api1 \
  .

Inside the Dockerfile, those paths could be accessible through (e.g.)
Yes, I was going down the line of the multiple build contexts. That looks beautiful and I would love that feature! I didn't see it portrayed quite that way before, but that looks great to me at least. I could just manage my own paths.
I didn't say so. I said "don't use Dockerfile for anything build related". You could perfectly well use Docker for builds:
@thaJeztah Multiple build contexts sounds like exactly what I am looking for! How soon can we see this feature?
So far it has just been a possible approach that was discussed; it would need a more thorough design, and also be looked at in light of future integration with https://github.com/moby/buildkit (which has tons of improvements over the current builder, so possibly has other approaches/solutions for this problem). I can open a separate issue for the proposal for discussion; once the design/feature has been decided on, contributions are definitely welcome.
For .NET I resolved it with a workaround... See:
I just encountered this issue. I have multiple Dockerfiles and a docker-compose file that fires everything up, housed in one repo. I've been using an nginx container to proxy my client-side code with the backend, but I am now trying to dockerize the webpack configuration so that it will copy over the code and watch for changes. I've run into this forbidden issue, since my COPY command has to reach into a sibling directory.
Opened #37129 with a proposal for multiple build-contexts |
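For anyone finding this thread now: BuildKit later shipped named build contexts, close to the proposal sketched above. With Buildx 0.8+ and the `dockerfile:1.4` syntax, extra contexts can be passed on the command line and referenced from COPY. The paths and names below are illustrative:

```shell
# Each --build-context adds a named context alongside the main one (.)
docker buildx build \
  --build-context lib1=../library-1 \
  --build-context api1=../api-1 \
  -t myimage .
```

Inside the Dockerfile, `COPY --from=lib1 / /src/lib1` then copies from the named context instead of from an image, so sibling and parent directories can feed a build without enlarging the primary context.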
With the previous configuration `docker-compose build` was always failing. This moves the dockerfiles in the parent dir and changes paths as a result. Ref moby/moby#2745
If you have an add line in the Dockerfile which points to another directory, the build of the image fails with the message "Forbidden path".
Example:
Gives:
I would expect the file to be written to `/tmp/some-file`, not `/tmp/relative-add/some-file`.