ISSUE: Can't get systemd to run with 1.11 #22285
Perhaps it would be better to paste what the error messages look like for your containers when run with such options.
Have you got some way to reproduce this easily? I don't have any systemd containers to hand. Any error messages would be useful, although there may be no useful ones. I would expect you would need …
I think I'm seeing this, or at least something similar, running Fedora 23. I've been trying to run the official centos:7 image, and it works like a charm with the Fedora-provided docker package version 1.9.1, but if I upgrade to 1.11 from the Docker repos it breaks. The errors I'm seeing are one of:

docker.service file:
Ok great; currently we would expect systemd to need both of those.
I'll close this issue, but if you think there's something that needs to be improved, or you have a suggestion to document it somewhere (although I don't think we currently describe how to run systemd in a container), feel free to open a pull request.
Alright, finally got the time to recreate. Here's 1.10:
Now, let's do the same on 1.11:
As you can see, in the 1.11 case, systemd doesn't start properly. However, if I do the following in 1.11:

So, something changed when going from 1.10 to 1.11 that broke this. What happened? Thanks! /b3
@thaJeztah Can you please reopen? The suggested parameters were sufficient in 1.10, but no longer work in 1.11.
I just tried, and it looks like something changed indeed. Wondering why you need systemd in your container here; a very simple Dockerfile is enough to run sshd:

```dockerfile
FROM ubuntu:16.04

RUN apt-get update && apt-get install openssh-server -y
RUN mkdir -p /var/run/sshd && chmod 0755 /var/run/sshd \
    && echo 'root:screencast' | chpasswd \
    && sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config \
    && sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

EXPOSE 22

CMD ["-D"]
ENTRYPOINT ["/usr/sbin/sshd"]
```
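For completeness, an image built from a Dockerfile like the one above could be tried out along these lines (the `sshd-demo` image and container names are placeholders of my own, not from this thread, and a running Docker daemon is required):

```shell
# Build the sshd-only image from the Dockerfile in the current directory
docker build -t sshd-demo .

# Run it, publishing the container's port 22 on host port 2222
docker run -d -p 2222:22 --name sshd-demo sshd-demo

# Connect; the root password is "screencast", as set in the Dockerfile
ssh -p 2222 root@localhost
```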
@thaJeztah Long story about systemd. It essentially boils down to users not being familiar with containers, but only knowing how to deal with systemd. And while your suggestion seems simple to you and me, most users would find it highly complicated compared to "apt-get install openssh-server" (that's how users think).
@beetree yes, unfortunately there's still a lot of educating needed; users still think of containers as "virtual machines", whereas they should be seen more as "bundled executables"
@thaJeztah Yes, the user is certainly wrong :P

@beetree not in all cases, but in most cases, I don't see a reason to do it (just my 0.02c)
Ok, as far as I can see it fails if …
We're seeing this as well on 1.11. @thaJeztah it's been made abundantly clear what Docker's perspective is on multi-process containers, but I'd like to share our use case, in the interest of providing at least a single data-point on the types of reasons users may be interested in running systemd in Docker containers. Hopefully it helps get past this "you're doing it wrong" attitude I see so often in docker bug-reports 😄 We (Treehouse) offer a feature to our students called "Workspaces", which is an online code editor and terminal that our students use to work on projects associated with their courses. Each Workspace is spun up as an on-demand docker container running the backing services that the frontend code-editor talks to, with persistence handled by bind-mounting gluster volumes into the container. The services that make up an active Workspace include things like:
we use docker's dynamic port mapping to expose these services (and other common dev-preview ports for e.g. flask, etc) on the host, and inject the routes into Redis for our load-balancer. because these are docker containers, we're able to run anywhere from 100-200 Workspaces on a given host, which is awesome. Having to do this in actual VMs would be cost-prohibitive, so Docker's worked really well for us in that regard. With experience, we've found that treating each active Workspace as a single container is optimal for several reasons:
There's probably some other things I'm forgetting, but those are the big ones for us at present. Ultimately, we hope Docker can see that there are some legitimate use cases for multi-process containers, though it's clear that best-practice for most use cases is still the single-process container model. Thanks for reading, and thanks for a great product! We love Docker, and hope we can keep using it well into the future!
@nathwill Docker-on-Mac does not use AppArmor or SELinux. It is based on Alpine, not Ubuntu. Can you try running with `-v /sys/fs/cgroup:/sys/fs/cgroup:rw`? In general though, even with multiprocess containers, I might go for something simpler than systemd.
Ah, ok. I had seen aufs in the docker info output and assumed it was Ubuntu 👍
Shrug... systemd's pretty dang simple if you're just using it as PID 1; we haven't really had many problems with it aside from the recent security-related stuff. Do you have any recommendation for one that supports restarts, environment pass-through and zombie-proc handling? Edit: re: "Can you try running with -v /sys/fs/cgroup:/sys/fs/cgroup:rw", I've forwarded a link to your comment to one of our developers who's in the beta; hope to hear back soon.
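For what it's worth, the core of such a minimal init (signal forwarding plus zombie reaping; restarts would need an extra supervision loop) fits in a few lines of shell. The sketch below is hypothetical and my own addition, not something from this thread; battle-tested options in this space include tini, dumb-init, and supervisord:

```shell
# tiny_init: run a single service as a child process, forward TERM/INT
# to it, and reap any other children. When a shell like this runs as
# PID 1 in a container, orphaned processes are re-parented to it, so
# the final `wait` also collects would-be zombies.
tiny_init() {
  "$@" &                                   # start the real service
  child=$!
  trap 'kill -TERM "$child" 2>/dev/null' TERM INT
  wait "$child"                            # block until the service exits
  status=$?
  trap - TERM INT
  wait                                     # reap any remaining children
  return "$status"
}
```

Environment pass-through comes for free here, since the child inherits the shell's environment.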
So, it exists. Still:
Testing your suggested approach (note the …):
That said, I think there is something funky with the host system. On one of my machines it actually works:
The issue is easy for me to reproduce. Just let me know what info you need about the two different host environments. Here are some basics:

Broken host environment:
Working host environment:
Kernel issue? /beetree |
Thanks for the ping @nathwill. What ended up working on the Docker for Mac Beta is running with just the …
@joesteele running with …
@thaJeztah gotcha. This isn't something we are/were considering for production. This is just, as you said, a workaround to facilitate setting things up for local development (and it's temporary at that). With the Docker for Mac Beta, I've been going around setting up our various services with Docker for ease in local development and I ran into this particular issue when setting up the service @nathwill was describing above. We'll probably end up moving away from our systemd approach anyhow. |
Anyone looking into this? @justincormack any more info you need from me in order to recreate this? |
;( Clearly a reproduced issue, but no solution :( Is your response here that using … ? /beetree
Ok, I did some more digging, and discovered that on Debian Jessie this just works, but on Ubuntu 15.10 it doesn't, even if I mount `/sys/fs/cgroup`. I used these steps:

```shell
cat << EOF | docker build -t tester -
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install openssh-server -y
RUN systemctl enable ssh
ENTRYPOINT ["/lib/systemd/systemd"]
EOF

docker run -d \
  --cap-add SYS_ADMIN \
  --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --security-opt seccomp:unconfined \
  --name tester \
  tester

docker exec -it tester /bin/bash -c "ps -e -o uid,pid,cmd"
```

On Ubuntu, only systemd itself is running; sshd never starts:

```
UID  PID CMD
  0    1 /lib/systemd/systemd
  0    5 ps -e -o uid,pid,cmd
```

At first, I thought the docker version was the difference: I downgraded docker to version 1.10.3 on Ubuntu, but this did not make a change, so it's not a regression in 1.11, just a difference between these hosts. Comparing the cgroup structure between Jessie and Ubuntu 15.10 while the container is running:

On Jessie:
On Ubuntu:
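As an aside (my addition, not part of the original comment), the per-container cgroups can be listed from the host like this, assuming the container is named `tester` as in the steps above:

```shell
# Resolve the full container ID, then look for cgroup directories
# created for it anywhere under the cgroup mount
CID=$(docker inspect --format '{{.Id}}' tester)
find /sys/fs/cgroup -maxdepth 3 -name "*$CID*"
```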
Notice that there are no cgroups beneath the container-ID on Ubuntu. Wondering what could influence this, I decided to start the container with AppArmor disabled:

```shell
docker run -d \
  --cap-add SYS_ADMIN \
  --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --security-opt seccomp:unconfined \
  --security-opt apparmor:unconfined \
  --name tester \
  tester
```

And success:

```
docker exec -it tester /bin/bash -c "ps -e -o uid,pid,cmd"
UID  PID CMD
  0    1 /lib/systemd/systemd
  0   19 /lib/systemd/systemd-journald
  0   24 /usr/sbin/sshd -D
  0  131 ps -e -o uid,pid,cmd
```

Cgroups are also created now beneath the container ID:
So, it looks like there's no regression: AppArmor is probably disabled on the host where it works, and enabled on the one where it doesn't. Disabling AppArmor on the container (using `--security-opt apparmor:unconfined`) makes systemd start correctly.
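To verify that hypothesis on a given host, AppArmor's status can be checked like this (my addition; paths and tools as found on Debian/Ubuntu systems):

```shell
# "Y" means the AppArmor LSM is built in and enabled in this kernel
cat /sys/module/apparmor/parameters/enabled 2>/dev/null || echo "AppArmor not present"

# aa-status exits 0 when AppArmor is enabled (needs the apparmor utilities installed)
sudo aa-status --enabled && echo "AppArmor enabled" || echo "AppArmor disabled or absent"
```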
I just found a build of …
I'll close this issue, because it doesn't appear to be a bug, but perhaps we should add an example to the documentation (or if someone wants to contribute that, 👍).
I've been running a few hundred containers with systemd in them since 1.7. The flags required have changed a little bit. In 1.10 I was adding `--cap-add=SYS_ADMIN`, `--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro`, and `--security-opt=seccomp:unconfined`.

With the same flags, it doesn't work in 1.11.

With `--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --privileged` it works.

With `--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --security-opt=seccomp:unconfined` it does not work.

Here's a dump of the system:
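Put together, the 1.10-era invocation described above would look something like this (the image name `my-systemd-image` and container name are placeholders of mine, not from the report):

```shell
docker run -d \
  --cap-add=SYS_ADMIN \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
  --security-opt=seccomp:unconfined \
  --name systemd-test \
  my-systemd-image
```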
Thanks for the help!
FYI: This is the only thing I could find about the issue while Googling, and it suggests something indeed did change in 1.11: https://trello.com/c/RFUcI1eV/158-3-make-docker-systemd-cgroups-driver-work-in-1-11