wants tmpfs to be dict, not list #4140

RanabirChakraborty opened this issue Feb 16, 2024 · 8 comments
@RanabirChakraborty

Prerequisites

  • This was not already reported in the past (duplicate check)
  • I can reproduce it with code from the main branch (latest unreleased version)
  • I include a minimal example for reproducing the bug
  • The bug is not trivial, as for those a direct pull-request is preferred
  • Running pip check does not report any conflicts
  • I was able to reproduce the issue on a different machine
  • The issue is not specific to any driver other than 'default' one

Environment

Rhel8

What happened

After the latest molecule release, we are facing the error "wants tmpfs to be dict, not list".

You can find the error details here: https://github.com/jboss-set/zeus/actions/runs/7929838122/job/21650874109?pr=240#step:6:90

Reproducing example

platforms:
  - name: instance
    image: registry.access.redhat.com/ubi8/ubi-init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
@safeaim

safeaim commented Feb 16, 2024

We had the same issue last week. While digging into this, it turned out that the Podman module that Molecule depends on requires the tmpfs parameter to be a dictionary. 1
So the fix is to supply the parameter as a dict. The Molecule docs 2 should also be updated to reflect this change.
From what I could find, the Podman container module has required tmpfs to be a dictionary for at least 4 years 3, so I'm surprised no one has encountered this issue until now.

Footnotes

  1. https://docs.ansible.com/ansible/latest/collections/containers/podman/podman_container_module.html#parameter-tmpfs

  2. https://ansible.readthedocs.io/projects/molecule/guides/systemd-container/

  3. https://github.com/containers/ansible-podman-collections/blame/efbfba7c3c4ed95bb75fcabfced61f650b28bac8/plugins/modules/podman_container.py#L849
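Per the podman_container module docs linked in footnote 1, the dict form maps each mount point to its mount options. A sketch of how the reproducing example might look in that form (the option strings are illustrative, not taken from the original report):

```yaml
platforms:
  - name: instance
    image: registry.access.redhat.com/ubi8/ubi-init
    # tmpfs as a dict: mount point -> mount options
    # (option strings below are illustrative examples)
    tmpfs:
      /run: "rw,noexec,nosuid,nodev"
      /tmp: "rw,noexec,nosuid,nodev"
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
```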

@mafalb

mafalb commented Feb 16, 2024

There's another solution.

Note that podman has a --systemd switch.
With it, a lot of things happen implicitly, so you no longer need to specify tmpfs.

platforms:
  - name: instance
    image: registry.access.redhat.com/ubi8/ubi-init
    systemd: always
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    ...

@RanabirChakraborty
Author

@safeaim Thank you for the findings.
@mafalb I also did the same in our project, but my real concern is that if we are facing this issue, it needs to be well documented. I wasn't able to find it in any documentation, though I could be wrong.

@mafalb

mafalb commented Feb 19, 2024

@mafalb I also did the same in our project, but my real concern is that if we are facing this issue, it needs to be well documented. I wasn't able to find it in any documentation, though I could be wrong.

Search for --systemd in the manpage
https://docs.podman.io/en/latest/markdown/podman-run.1.html

@RanabirChakraborty
Author

So if I want tmpfs filesystems mounted on those directories, how should I write the above reproducing example? I have given it a try like the one below:

    tmpfs:
      "/tmp": "exec"
      "/run": "rw,noexec,nosuid,nodev"

But it didn't work.

@ssbarnea ssbarnea added bug and removed new labels Feb 28, 2024
@perobertson

This is related to: ansible-community/molecule-plugins#242

The only workaround I've found so far is to pin to 'molecule-plugins[podman]==23.5.0', but that's not a great long-term solution.

@perobertson

There's another solution.

Note that podman has a --systemd switch. With it, a lot of things happen implicitly, so you no longer need to specify tmpfs.

platforms:
  - name: instance
    image: registry.access.redhat.com/ubi8/ubi-init
    systemd: always
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    ...

This solution worked for me, with the addition that the image you are testing needs to use systemd as the command to run:

platforms:
  - name: instance
    image: docker.io/fedora:40

    # Make some changes to the base image and install systemd
    dockerfile: Dockerfile.j2

    # use systemd as init system and mount required directories
    # https://docs.podman.io/en/latest/markdown/podman-run.1.html#systemd-true-false-always
    systemd: true

    # explicitly run systemd as Pid 1
    command: /sbin/init

systemd: true was supposed to set the command to run, but it was not being picked up for me and I needed to explicitly set it.

To test if it's working, you could try running a task like this:

- become: true
  ansible.builtin.systemd_service:
    daemon_reload: true

@mafalb

mafalb commented Apr 27, 2024

systemd: true was supposed to set the command to run, but it was not being picked up for me and I needed to explicitly set it.

It depends on how the container was built. systemd: true is not supposed to set the command to run [1]. fedora:40 is built with a default CMD of /bin/bash, so you have to override it to get systemd. On the contrary, ubi8/ubi-init (from my example above) is built with a default CMD of /sbin/init, and in that case you don't need to specify command.

[1] from podman-run(1)

true enables systemd mode only when the command executed inside the container is systemd, /usr/sbin/init, /sbin/init or /usr/local/sbin/init.

always enforces the systemd mode to be enabled.

But neither --systemd true nor --systemd always is changing command or entrypoint.
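Putting that distinction into config terms, the two cases discussed in this thread differ only in whether command must be overridden. A sketch assuming the stock CMDs of the two images mentioned above:

```yaml
platforms:
  # ubi8/ubi-init ships with CMD /sbin/init, which --systemd true
  # recognizes, so no command override is needed
  - name: ubi-init-instance
    image: registry.access.redhat.com/ubi8/ubi-init
    systemd: true

  # fedora:40 ships with CMD /bin/bash, so the command must be set
  # explicitly for systemd mode to kick in
  - name: fedora-instance
    image: docker.io/fedora:40
    systemd: true
    command: /sbin/init
```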
