
Containers don't work with Podman instead of Docker #201

Closed
vorburger opened this issue Mar 13, 2020 · 23 comments
Labels
bug Something isn't working

Comments

@vorburger
Contributor

I personally prefer using https://podman.io instead of Docker (because containers don't run as root on the host, although you can still be root inside the container). When I tried it with this project (FYI, they have a podman-compose which, in my experience, is reasonably compatible with docker-compose), I noticed that the images of this project don't yet work with Podman instead of Docker. The web container, for example, failed with the error below (I hadn't even checked the others).

The short-term workaround is, of course, to just use Docker instead of Podman for now, but I thought I'd at least just let you know about this by filing this issue here.

Glancing over this project, I've noticed open PRs #192 and #126; it's possible they help with this.

@saghul more of an FYI

[s6-init] making user provided files available at /var/run/s6/etc...                                                                                                                                                 
[s6-init] ensuring user provided files have correct perms...                                                                                                                                                         
[fix-attrs.d] applying ownership & permissions fixes...                                                                                                                                                              
[fix-attrs.d] done.                                                                                                                                                                                                  
[cont-init.d] executing container initialization scripts...                                                                                                                                                          
[cont-init.d] 01-set-timezone: executing...                                                                                                                                                                          
[cont-init.d] 01-set-timezone: exited 0.                                                                                                                                                                             
[cont-init.d] 10-config: executing...                                                                                                                                                                                
mkdir: cannot create directory '/config/nginx': Permission denied                                                                                                                                                    
mkdir: cannot create directory '/config/keys': Permission denied                                                                                                                                                     
generating self-signed keys in /config/keys, you can replace these with your own keys if required                                                                                                                    
Generating a RSA private key                                                                                                                                                                                         
........................++++                                                                                                                                                                                         
....................................................................................................++++                                                                                                             
writing new private key to '/config/keys/cert.key'                                                                                                                                                                   
req: Can't open "/config/keys/cert.key" for writing, No such file or directory                                                                                                                                       
Can't open /config/nginx/dhparams.pem for writing, No such file or directory                                                                                                                                         
140389928026176:error:02001002:system library:fopen:No such file or directory:../crypto/bio/bss_file.c:74:fopen('/config/nginx/dhparams.pem','w')                                                                    
140389928026176:error:2006D080:BIO routines:BIO_new_file:no such file:../crypto/bio/bss_file.c:81:                                                                                                                   
cp: cannot create regular file '/config/nginx/nginx.conf': No such file or directory                                                                                                                                 
/var/run/s6/etc/cont-init.d/10-config: line 44: /config/nginx/meet.conf: No such file or directory
/var/run/s6/etc/cont-init.d/10-config: line 48: /config/nginx/ssl.conf: No such file or directory
/var/run/s6/etc/cont-init.d/10-config: line 52: /config/nginx/site-confs/default: No such file or directory
cp: cannot create regular file '/config/config.js': Permission denied
sed: can't read /config/config.js: No such file or directory
cp: cannot create regular file '/config/interface_config.js': Permission denied
sed: can't read /config/interface_config.js: No such file or directory
[cont-init.d] 10-config: exited 2.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [emerg] open() "/config/nginx/nginx.conf" failed (2: No such file or directory)
nginx: [emerg] open() "/config/nginx/nginx.conf" failed (2: No such file or directory)
@saghul
Member

saghul commented Mar 13, 2020

Thanks for the report! Yeah, #192 should solve this. We're kinda busy right now; hopefully I can take a look at it and make more progress soon.

@vorburger
Contributor Author

Actually, I got confused and mixed something up myself... the problem here is likely NOT the use of root inside the container, and #192 may not help. This is only about the volumes in docker-compose.yml, and the :Z SELinux suffix may be all that's needed here.

I'll try this out (after I'm done with an initial basic set-up using docker instead of podman).
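For illustration, this is roughly what the :Z suffix looks like on a compose volume. (A sketch only; the service name and host paths here are examples, not the project's actual file.)

```yaml
services:
  web:
    image: jitsi/web
    volumes:
      # :Z asks the container runtime to relabel the host directory with a
      # private SELinux label so the container is allowed to read/write it
      - ./jitsi-meet-cfg/web:/config:Z
```

On hosts without SELinux the suffix is ignored, which is why it shouldn't break plain docker-compose.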

@saghul
Member

saghul commented Mar 13, 2020

Oh, nice! Let us know how that goes! If you feel like it, you could share your podman-compose and add it here: https://github.com/jitsi/docker-jitsi-meet/tree/dev/examples

vorburger added a commit to vorburger/docker-jitsi-meet that referenced this issue Mar 13, 2020
This helps with, but doesn't entirely fix just yet,
running this project on Fedora using `podman-compose up -d` instead of Docker.

This should not break regular `docker-compose` on systems without SELinux.
@vorburger
Contributor Author

share your podman-compose

That's the beauty of it: there is no separate podman-compose YAML; podman-compose can just use docker-compose.yml!

With PR #204 and after sudo sysctl net.ipv4.ip_unprivileged_port_start=80 it goes much further, but the web container still doesn't entirely manage to start and now fails with:
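For anyone else trying this: the sysctl above is not persistent across reboots. A sketch of making it permanent, assuming a standard sysctl.d layout (the drop-in file name is arbitrary):

```shell
# let unprivileged (rootless) processes bind ports >= 80
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf
# reload all sysctl configuration files, including the new drop-in
sudo sysctl --system
```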

[cont-init.d] 10-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31

Need to further investigate that some other time.

@saghul
Member

saghul commented Mar 14, 2020

Nice! I think the problem might be that we use a user-defined network to route traffic across containers using an FQDN that doesn't exist: xmpp.meet.jitsi

@sapkra
Contributor

sapkra commented Mar 24, 2020

The same errors are occurring in this issue: #254

@saghul
Member

saghul commented Mar 24, 2020

@sapkra Why do you think those are related?

@sapkra
Contributor

sapkra commented Mar 25, 2020

Just because the error is the same. Maybe it has the same reason... maybe not.

nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31

@saghul
Member

saghul commented Mar 25, 2020

While the error is the same (the templating failed), it could have been triggered for unrelated reasons, I think.

@Tinigriffy

Tinigriffy commented Apr 4, 2020

Just because the error is the same. Maybe it has the same reason...maybe not.

nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31

I had the same issue and I resolved it by adding env_file. See my updated docker-compose.yml
Don't forget to clean the configuration directory before trying it.

You may also need to change the owner of jitsi-meet-cfg/prosody/data to deal with the non-root prosody daemon: podman unshare chown 101:102 jitsi-meet-cfg/prosody/data

@supermar1010

@Tinigriffy I just tried that, now I have the nginx: [emerg] host not found in upstream "xmpp.meet.jitsi" in /config/nginx/meet.conf:35 error message :(

@Tinigriffy

@Tinigriffy I just tried that, now I have the nginx: [emerg] host not found in upstream "xmpp.meet.jitsi" in /config/nginx/meet.conf:35 error message :(

Did you clean the configuration folder before running this version? Did you launch the YAML from the directory where the .env is?
By the way, here is the very last version of the YAML I used:
docker-compose.txt
It probably doesn't change anything compared to the previous version, but this is the one I actually used to launch everything. With a rootless environment everything started, but I could not get a working video call with 2 browsers open on the same server. Probably a NAT/forwarding issue, so I run it with a root environment.

@supermar1010

Ah, I see what I was missing: the extra_hosts thing. I'll try this, thanks for the reply :)

@ReveredMachine

ReveredMachine commented Apr 27, 2020

Currently I am writing files to build podman images so that I can later run the containers with systemd.

But when I run a prosody container, I get the following errors:

/var/run/s6/etc/cont-init.d/10-config: line 4: /usr/bin/tpl: Permission denied
...
/var/run/s6/etc/cont-init.d/10-config: line 30: /usr/bin/tpl: Permission denied
/var/run/s6/etc/cont-init.d/10-config: line 31: /usr/bin/tpl: Permission denied

What could be the reason for the Permission denied?

@bcstinch

Just a heads-up: I created a full guide for this, as I was tired of not finding good information:
http://tendie.haus/how-to-setup-a-basic-jitsi-instance-with-podman/

@sapkra sapkra added the bug Something isn't working label Oct 8, 2020
@cbs228

cbs228 commented Dec 22, 2020

podman pod create --name jitsi \
    --add-host=meet.jitsi:127.0.0.1 \
    --add-host=jvb.meet.jitsi:127.0.0.1 \
    --add-host=jicofo.meet.jitsi:127.0.0.1 \
    --add-host=jigasi.meet.jitsi:127.0.0.1 \
    --add-host=xmpp.meet.jitsi:127.0.0.1 \
    --add-host=auth.meet.jitsi:127.0.0.1 \
    --add-host=muc.meet.jitsi:127.0.0.1 \
    --add-host=internal-muc.meet.jitsi:127.0.0.1 \
    --add-host=guest.meet.jitsi:127.0.0.1 \
    --add-host=recorder.meet.jitsi:127.0.0.1 \
    --add-host=etherpad.meet.jitsi:127.0.0.1

The easiest way to get up and running might be to create a pod. Pods have a single IP address, and communicating between containers in a pod is as easy as using localhost: no need to expose ports or configure dnsmasq DNS resolution. I've added host aliases above so that names used within jitsi resolve to localhost. You'll probably want to either --publish ports 80 and 443 OR join your pod to a CNI --network. When you create your other containers, join them to this pod.
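To make that last step concrete, here is a sketch of joining one container to the pod (the image tag, volume path, and env file are assumptions; adjust to your setup):

```shell
# containers in the pod share a network namespace, so the other containers
# reach this one on localhost via the --add-host aliases defined above
podman run -d --pod jitsi --name prosody \
    -v ~/.jitsi-meet-cfg/prosody:/config:Z \
    --env-file .env \
    jitsi/prosody:stable
```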

Then a

podman generate systemd --name --files jitsi

will generate unit files for everything in the pod.

I generally prefer to create my infrastructure in Ansible, rather than via docker-compose, so I just did that. Maybe there is a way to do this with podman-compose too.

I would very much prefer it if these containers could be made to start without root. There is no reason why jicofo and friends need root. The nginx instance doesn't need root either if you already have a TLS proxy/gateway and don't want to bind to port 80. These images all appear to use the s6-overlay, however, and they won't start without root. It does take a little work up-front to get the SELinux and DAC permissions right on your host volumes… but if you do, many containers can be started with --user some_service_user --userns=keep-id or the like.

Last time I checked, inter-container DNS resolution is still not enabled by default on podman's default internal network. This causes problems between containers that connect to each other by name. You can podman network create a new one, and it will have dnsmasq DNS by default. Join to it with --network. Vexingly, nginx doesn't respect /etc/resolv.conf, and you usually need to provide a resolver configuration directive with the gateway IP of your --network. And that still seems iffy to me; I've had nginx continue to use stale IPs after container restarts. Pods are better.
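For reference, a minimal sketch of the resolver workaround described above (the gateway IP 10.89.0.1 and the upstream port are examples; check `podman network inspect` for your network's gateway):

```nginx
# resolve upstream names via the podman network's DNS instead of /etc/resolv.conf
resolver 10.89.0.1 valid=10s;

location /xmpp-websocket {
    # using a variable forces nginx to re-resolve the name at request time
    # rather than caching the IP once at startup
    set $xmpp_upstream xmpp.meet.jitsi;
    proxy_pass http://$xmpp_upstream:5280;
}
```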

@grrvs

grrvs commented Dec 3, 2021

I followed the Quick start of the Self-Hosting Guide - Docker up to point 6 and managed to start a jitsi instance passing extra arguments to podman-compose 🎉

podman-compose --podman-run-args "--env-file .env --add-host xmpp.meet.jitsi:127.0.0.1" up -d

<tl;dr>

For what it's worth, I went down the podman-compose up road yesterday and would like to wrap up what I have learned:

I had the same issue and I resolve it by adding env_file. See my updated docker-compose.yml Do not forget to clean the configuration directory before trying it. [...]

If anyone likes to reproduce, this my approach using ansible: grrvs/ansible_podman-compose_jitsi

@alexmaras

alexmaras commented Jun 2, 2023

For anybody looking to do this now with rootless podman, there are a bunch of differences compared to the older answers with newer versions of podman and podman-compose. The most notable are:

  1. podman-compose doesn't use an infra container in pods - this means the localhost thing doesn't work anymore unless you enable infra explicitly, which does make port forwarding a bit difficult
  2. addressing using hostnames works fine in a pod, so using localhost becomes a bit redundant

The way I did it was using the following docker compose:

version: '3'

services:
    # Frontend
    web:
        image: docker.io/jitsi/web:unstable
        ports:
            - '192.168.X.X:${HTTPS_PORT}:443'
        volumes:
            - ${CONFIG}/web:/config:Z
            - ${CONFIG}/web/letsencrypt:/etc/letsencrypt:Z
            - ${CONFIG}/transcripts:/usr/share/jitsi-meet/transcripts:Z
        env_file:
            - ./.env

    # XMPP server
    prosody:
        image: docker.io/jitsi/prosody:unstable
        expose:
            - '5222'
            - '5347'
            - '5280'
        volumes:
            - ${CONFIG}/prosody:/config:Z
        env_file:
            - ./.env

    # Focus component
    jicofo:
        image: docker.io/jitsi/jicofo:unstable
        volumes:
            - ${CONFIG}/jicofo:/config:Z
        env_file:
            - ./.env
        depends_on:
            - prosody
            
    # Video bridge
    jvb:
        image: docker.io/jitsi/jvb:unstable
        ports:
            - '192.168.X.X:${JVB_PORT}:${JVB_PORT}/udp'
        volumes:
            - ${CONFIG}/jvb:/config:Z
        env_file:
            - ./.env
        depends_on:
            - prosody

And the relevant parts of the .env are:

# Exposed HTTP port
HTTPS_PORT=42445

# System time zone
TZ=Australia/Perth
CONFIG=./config

XMPP_SERVER=prosody
XMPP_PORT=5222
XMPP_BOSH_URL_BASE=http://prosody:5280
JVB_PORT=10000
JVB_WS_SERVER_ID=jvb

# Public URL for the web service (required)
PUBLIC_URL=https://public-url-here

JVB_ADVERTISE_IPS=internal-ip,external-ip,listed-here

I had to dig through things a bit to figure some stuff out, but reasoning is as follows:

  • XMPP_SERVER: set to the hostname in the docker-compose file so that the containers address it using the internal name
  • XMPP_BOSH_URL_BASE: also set to the hostname in the docker-compose file - this is important, because the web container does not use the XMPP_SERVER env variable to build the XMPP_BOSH_URL_BASE, so the reverse proxy (from web to prosody) internally doesn't work as default if you're not using xmpp.meet.jitsi as the hostname for prosody
  • JVB_WS_SERVER_ID: this isn't really that important, but it made it so that browsers don't get the internal IP of the JVB container. Again, not important, but it's nice to have it consistent
  • JVB_ADVERTISE_IPS: this helps with the slirp4netns networking stuff - if you're behind a NAT and want your instance to be available to clients both inside and outside of the NAT, you can set multiple IPs here. I set the IP of the server that houses my jitsi instance, as well as my external static IP.
  • Setting the IPs in the port forwarding (i.e. '192.168.X.X:${JVB_PORT}:${JVB_PORT}/udp') - this part is only relevant if you have multiple IPs on the host machine you're running podman on. I had packets coming in on one address and going out on the other, which broke things with JVB. Binding it to a specific IP fixed that.

My setup is with a reverse proxy at the edge of my network handling SSL termination with it reverse proxying to the web container, and port 10000 punched through the firewall going straight to my server. I've tested this with 2 devices connected internally, 2 devices connected externally, with screen sharing and 3 video streams. All working properly.

@stfnw

stfnw commented Nov 19, 2023

Just to add another recent data point: hosting Jitsi meet with rootless podman and podman-compose worked for me out of the box.
Following the self-hosting guide and using the default docker-compose.yml was enough.

Some information about my system/setup:
  • Current Arch Linux (kernel 6.6.1-arch1-1); with podman / podman-compose and dependencies from the default repos

  • Current Jitsi Meet release from https://github.com/jitsi/docker-jitsi-meet/releases/tag/stable-9078

  • Started with podman-compose --podman-run-args '--env-file .env' up

  • Excerpt from podman info:

    host:
      arch: amd64
      cgroupManager: systemd
      cgroupVersion: v2
      conmon:
        package: /usr/bin/conmon is owned by conmon 1:2.1.8-1
        path: /usr/bin/conmon
        version: 'conmon version 2.1.8, commit: 00e08f4a9ca5420de733bf542b930ad58e1a7e7d'
      kernel: 6.6.1-arch1-1
      networkBackend: netavark
      networkBackendInfo:
        backend: netavark
        dns:
          package: /usr/lib/podman/aardvark-dns is owned by aardvark-dns 1.8.0-1
          path: /usr/lib/podman/aardvark-dns
          version: aardvark-dns 1.8.0
        package: /usr/lib/podman/netavark is owned by netavark 1.8.0-1
        path: /usr/lib/podman/netavark
        version: netavark 1.8.0
      ociRuntime:
        name: crun
        package: /usr/bin/crun is owned by crun 1.11.2-1
        path: /usr/bin/crun
        version: |-
          crun version 1.11.2
          commit: ab0edeef1c331840b025e8f1d38090cfb8a0509d
          rundir: /run/user/1000/crun
          spec: 1.0.0
          +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
      os: linux
      slirp4netns:
        executable: /usr/bin/slirp4netns
        package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.2-1
        version: |-
          slirp4netns version 1.2.2
          commit: 0ee2d87523e906518d34a6b423271e4826f71faf
          libslirp: 4.7.0
          SLIRP_CONFIG_VERSION_MAX: 4
          libseccomp: 2.5.4
      graphDriverName: btrfs
      graphOptions: {}
      graphStatus:
        Build Version: Btrfs v6.5.3
        Library Version: "102"
    version:
      APIVersion: 4.7.2
      Built: 1698787144
      BuiltTime: Tue Oct 31 21:19:04 2023
      GitCommit: 750b4c3a7c31f6573350f0b3f1b787f26e0fe1e3-dirty
      GoVersion: go1.21.3
      Os: linux
      OsArch: linux/amd64
      Version: 4.7.2

@nykula

nykula commented Mar 14, 2024

On Debian Bookworm (stable) with podman 4.3.1, podman-compose 1.0.3 and apache 2.4.57 reverse proxy, only the config by @alexmaras + s/unstable/stable-9258/g + s/192.168.X.X://g + COLIBRI_WEBSOCKET_REGEX=jvb results in a working conference with working video in three tabs. However, the first WebSocket connection from Firefox 123 to jvb fails with an error about bridge channel disconnected, despite netcat saying the UDP port 10000 is open. Then it retries the connection and seems to succeed. I didn't find anything relevant in podman, apache or console logs.

@bgrozev
Member

bgrozev commented Mar 14, 2024

However, the first WebSocket connection from Firefox 123 to jvb fails with an error about bridge channel disconnected, despite netcat saying the UDP port 10000 is open. Then it retries the connection and seems to succeed. I didn't find anything relevant in podman, apache or console logs.

Interesting. Is this reproducible? Is it just the first connection after the bridge is restarted, or the first connection any time you open a conference in firefox? The WebSocket to JVB uses TLS/443, not UDP/10000 (not sure exactly how it's routed with the default docker/podman setup), but I don't see why it would fail initially and then succeed.

@nykula

nykula commented Mar 14, 2024

I see the mentioned bridge error message during every conference, at the moment the other person joins. According to the Network tab of the inspector, the first request to /colibri-ws/jvb/ gets ns_error_websocket_connection_refused (on the TLS port indeed, I now see it matches a 502 status code in Apache logs), then another request gets 101 Switching Protocols and continues. Here are my virtual host lines for the reverse proxy:

        SSLProxyEngine on
        ProxyPreserveHost on
        ProxyTimeout 900
        ProxyPass / https://localhost:8443/ upgrade=websocket
        ProxyPassReverse / https://localhost:8443/

Replacing the upgrade=websocket with a separate ProxyPass for /xmpp-websocket using wss and another one for /colibri-ws/, followed by reloading the apache2 service, results in the same behavior. Regarding how reproducible the environment is: not very. It's a Debian server manually configured over SSH, with unattended upgrades enabled for latest stable, no third-party repositories added system-wide, running a few other rootless podman-compose and Node.js apps.
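For completeness, the split variant I tried looked roughly like this (a sketch, not the exact config; note that upgrade=websocket needs Apache 2.4.47+, while the explicit wss ProxyPass form also works on older versions):

```apache
SSLProxyEngine on
ProxyPreserveHost on
ProxyTimeout 900
# route the two websocket paths explicitly instead of upgrade=websocket on /
ProxyPass /xmpp-websocket wss://localhost:8443/xmpp-websocket
ProxyPass /colibri-ws/ wss://localhost:8443/colibri-ws/
ProxyPass / https://localhost:8443/
ProxyPassReverse / https://localhost:8443/
```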

@saghul
Member

saghul commented Mar 26, 2024

Closing as per #201 (comment)

@saghul saghul closed this as completed Mar 26, 2024