
When a pod has a publish property, generating the systemd unit deletes the containers #651

Open
thibaultamartin opened this issue Oct 12, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@thibaultamartin

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Let's consider a pod created with containers.podman.podman_pod.
If the pod was created with the publish property, then generating the systemd unit for that pod with generate_systemd after containers have been added to it removes and recreates the pod, deleting the containers it contained.

Steps to reproduce the issue:

  1. Create a pod using containers.podman.podman_pod and a publish property
  2. Add one or several containers in the pod
  3. Generate a systemd unit for that pod with generate_systemd
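The steps above can be condensed into a minimal reproducer (the pod/container names, image, and port mapping here are illustrative, not from my actual role):

```yaml
- name: Create a pod with published ports
  containers.podman.podman_pod:
    name: repro-pod
    publish:
      - "8080:80"
    state: created

- name: Add a container to the pod
  containers.podman.podman_container:
    name: repro-container
    image: docker.io/library/alpine:latest
    pod: repro-pod
    command: sleep infinity

# This task recreates the pod and wipes its containers
- name: Generate the systemd unit for the pod
  containers.podman.podman_pod:
    name: repro-pod
    generate_systemd:
      path: /etc/systemd/system
```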

Describe the results you received:

The pod is recreated and the containers it contained are wiped.

Describe the results you expected:

I expected the pod to keep running with the containers it contained and to publish the ports on the host.

Additional information you deem important (e.g. issue happens only occasionally):

Version of the containers.podman collection:
Either git commit if installed from git: git show --summary
Or version from ansible-galaxy if installed from galaxy: ansible-galaxy collection list | grep containers.podman

containers.podman 1.9.4  

Output of ansible --version:

ansible [core 2.13.5]
  config file = None
  configured module search path = ['/Users/thib/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /opt/homebrew/lib/python3.10/site-packages/ansible
  ansible collection location = /Users/thib/.ansible/collections:/usr/share/ansible/collections
  executable location = /opt/homebrew/bin/ansible
  python version = 3.10.13 (main, Aug 24 2023, 22:36:46) [Clang 14.0.3 (clang-1403.0.22.14.1)]
  jinja version = 3.1.2
  libyaml = True

Output of podman version:

podman version 4.4.1

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-1.el9_2.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: 606c693de21bcbab87e31002e46663c5f2dc8a9b'
  cpuUtilization:
    idlePercent: 98.5
    systemPercent: 0.25
    userPercent: 1.25
  cpus: 12
  distribution:
    distribution: '"rhel"'
    version: "9.2"
  eventLogger: journald
  hostname: ergaster.org
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.14.0-284.30.1.el9_2.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 4679327744
  memTotal: 50237087744
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.8.4-1.el9_2.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.4
      commit: 5a8fa99a5e41facba2eda4af12fa26313918805b
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.2.0-3.el9.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 25195827200
  swapTotal: 25379729408
  uptime: 119h 7m 9.00s (Approximately 4.96 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 21
    paused: 0
    running: 20
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 1261995819008
  graphRootUsed: 778092228608
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 73
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.4.1
  Built: 1692279033
  BuiltTime: Thu Aug 17 15:30:33 2023
  GitCommit: ""
  GoVersion: go1.19.10
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.4.1-13.el9_2.x86_64

Playbook you run with ansible (e.g. content of playbook.yaml):

This is the role I'm calling from my playbook

- name: Create synapse pod
  containers.podman.podman_pod:
    name: pod-synapse
    publish:
      - "10.8.0.2:9000:9000"
    state: created

- name: Stop synapse pod
  containers.podman.podman_pod:
    name: pod-synapse
    state: stopped

- name: Create synapse's postgresql
  containers.podman.podman_container:
    name: synapse-postgres
    image: docker.io/library/postgres:{{ synapse_container_pg_tag }}
    pod: pod-synapse
    volume:
      - synapse_pg_pdata:/var/lib/postgresql/data
    env:
      {
        "POSTGRES_USER": "{{ synapse_pg_username }}",
        "POSTGRES_PASSWORD": "{{ synapse_pg_password }}",
        "POSTGRES_INITDB_ARGS": "--encoding=UTF-8 --lc-collate=C --lc-ctype=C",
      }

- name: Copy Postgres config
  ansible.builtin.copy:
    src: postgresql.conf
    dest: /var/lib/containers/storage/volumes/synapse_pg_pdata/_data/postgresql.conf

- name: Create synapse container and service
  containers.podman.podman_container:
    name: synapse
    image: docker.io/matrixdotorg/synapse:{{ synapse_container_tag }}
    pod: pod-synapse
    volume:
      - synapse_data:/data
    labels:
      {
        "traefik.enable": "true",
        "traefik.http.routers.synapse.entrypoints": "websecure",
        "traefik.http.routers.synapse.rule": "Host(`matrix.{{ base_domain }}`)",
        "traefik.http.services.synapse.loadbalancer.server.port": "8008",
        "traefik.http.routers.synapse.tls": "true",
        "traefik.http.routers.synapse.tls.certresolver": "letls",
      }
    requires:
      - synapse-postgres

- name: Copy Synapse's homeserver configuration file
  ansible.builtin.template:
    src: homeserver.yaml.j2
    dest: /var/lib/containers/storage/volumes/synapse_data/_data/homeserver.yaml

- name: Copy Synapse's logging configuration file
  ansible.builtin.template:
    src: log.config.j2
    dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.log.config

- name: Copy Synapse's signing key
  ansible.builtin.template:
    src: signing.key.j2
    dest: /var/lib/containers/storage/volumes/synapse_data/_data/{{ matrix_server_name }}.signing.key

- name: Generate the systemd unit for Synapse
  containers.podman.podman_pod:
    name: pod-synapse
    generate_systemd:
      path: /etc/systemd/system
      restart_policy: always

- name: Enable synapse unit
  ansible.builtin.systemd:
    name: pod-pod-synapse.service
    enabled: true
    daemon_reload: true

- name: Make sure synapse is running
  ansible.builtin.systemd:
    name: pod-pod-synapse.service
    state: started
    daemon_reload: true

Command line and output of ansible run with high verbosity

Please NOTE: if you submit a bug about idempotency, run the playbook with --diff option, like:

ansible-playbook -i inventory --diff -vv playbook.yml

When running without the publish property, the output of the systemd generation task is:

TASK [synapse : Generate the systemd unit for Synapse] *********************************************************************************************************
task path: /Users/thib/Projects/ergaster-infra/roles/synapse/tasks/main.yml:64
ok: [cloud.ergaster.org] => {"actions": [], "changed": false, "pod": {"CgroupParent": "machine.slice", "CgroupPath": "machine.slice/machine-libpod_pod_26b14ba9bd4bbfc587ee601ce9a523ce94d4ad42bd5cda20b73dc3481ea70eb6.slice", "Containers": [{"Id": "3d99760c03111e8a2701342b56f4b7fb62f817d6fce697ea50e945b3d94021f3", "Name": "synapse", "State": "running"}, {"Id": "82c1b5ccbeb44298f43d702174d860e766e777649e7b978cbfca573951088aad", "Name": "synapse-postgres", "State": "running"}, {"Id": "99d85490249289715693598410ac15da12737f9c06716d17f1e88833cf64013c", "Name": "26b14ba9bd4b-infra", "State": "running"}], "CreateCgroup": true, "CreateCommand": ["podman", "pod", "create", "--name", "pod-synapse"], "CreateInfra": true, "Created": "2023-10-12T10:53:42.729242763+02:00", "ExitPolicy": "continue", "Hostname": "", "Id": "26b14ba9bd4bbfc587ee601ce9a523ce94d4ad42bd5cda20b73dc3481ea70eb6", "InfraConfig": {"DNSOption": null, "DNSSearch": null, "DNSServer": null, "HostAdd": null, "HostNetwork": false, "NetworkOptions": null, "Networks": ["podman"], "NoManageHosts": false, "NoManageResolvConf": false, "PortBindings": {}, "StaticIP": "", "StaticMAC": "", "pid_ns": "private", "userns": "host", "uts_ns": "private"}, "InfraContainerID": "99d85490249289715693598410ac15da12737f9c06716d17f1e88833cf64013c", "Name": "pod-synapse", "NumContainers": 3, "SharedNamespaces": ["uts", "ipc", "net"], "State": "Running"}, "podman_actions": [], "podman_systemd": {"container-synapse": "# container-synapse.service\n# autogenerated by Podman 4.4.1\n# Thu Oct 12 10:55:22 CEST 2023\n\n[Unit]\nDescription=Podman container-synapse.service\nDocumentation=man:podman-generate-systemd(1)\nWants=network-online.target\nAfter=network-online.target\nRequiresMountsFor=/run/containers/storage\nBindsTo=container-synapse-postgres.service pod-pod-synapse.service\nAfter=container-synapse-postgres.service 
pod-pod-synapse.service\n\n[Service]\nEnvironment=PODMAN_SYSTEMD_UNIT=%n\nRestart=always\nTimeoutStopSec=70\nExecStart=/usr/bin/podman start synapse\nExecStop=/usr/bin/podman stop  \\\n\t-t 10 synapse\nExecStopPost=/usr/bin/podman stop  \\\n\t-t 10 synapse\nPIDFile=/run/containers/storage/overlay-containers/3d99760c03111e8a2701342b56f4b7fb62f817d6fce697ea50e945b3d94021f3/userdata/conmon.pid\nType=forking\n\n[Install]\nWantedBy=default.target\n", "container-synapse-postgres": "# container-synapse-postgres.service\n# autogenerated by Podman 4.4.1\n# Thu Oct 12 10:55:22 CEST 2023\n\n[Unit]\nDescription=Podman container-synapse-postgres.service\nDocumentation=man:podman-generate-systemd(1)\nWants=network-online.target\nAfter=network-online.target\nRequiresMountsFor=/run/containers/storage\nBindsTo=pod-pod-synapse.service\nAfter=pod-pod-synapse.service\n\n[Service]\nEnvironment=PODMAN_SYSTEMD_UNIT=%n\nRestart=always\nTimeoutStopSec=70\nExecStart=/usr/bin/podman start synapse-postgres\nExecStop=/usr/bin/podman stop  \\\n\t-t 10 synapse-postgres\nExecStopPost=/usr/bin/podman stop  \\\n\t-t 10 synapse-postgres\nPIDFile=/run/containers/storage/overlay-containers/82c1b5ccbeb44298f43d702174d860e766e777649e7b978cbfca573951088aad/userdata/conmon.pid\nType=forking\n\n[Install]\nWantedBy=default.target\n", "pod-pod-synapse": "# pod-pod-synapse.service\n# autogenerated by Podman 4.4.1\n# Thu Oct 12 10:55:22 CEST 2023\n\n[Unit]\nDescription=Podman pod-pod-synapse.service\nDocumentation=man:podman-generate-systemd(1)\nWants=network-online.target\nAfter=network-online.target\nRequiresMountsFor=/run/containers/storage\nWants=container-synapse.service container-synapse-postgres.service\nBefore=container-synapse.service container-synapse-postgres.service\n\n[Service]\nEnvironment=PODMAN_SYSTEMD_UNIT=%n\nRestart=always\nTimeoutStopSec=70\nExecStart=/usr/bin/podman start 26b14ba9bd4b-infra\nExecStop=/usr/bin/podman stop  \\\n\t-t 10 26b14ba9bd4b-infra\nExecStopPost=/usr/bin/podman stop  
\\\n\t-t 10 26b14ba9bd4b-infra\nPIDFile=/run/containers/storage/overlay-containers/99d85490249289715693598410ac15da12737f9c06716d17f1e88833cf64013c/userdata/conmon.pid\nType=forking\n\n[Install]\nWantedBy=default.target\n"}, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

When running with the publish property, the output of the systemd unit generation task is:

TASK [synapse : Generate the systemd unit for Synapse] ****************************************************************************************************
task path: /Users/thib/Projects/ergaster-infra/roles/synapse/tasks/main.yml:66
changed: [cloud.ergaster.org] => {"actions": ["recreated pod-synapse"], "changed": true, "pod": {"CgroupParent": "machine.slice", "CgroupPath": "machine.slice/machine-libpod_pod_26b14ba9bd4bbfc587ee601ce9a523ce94d4ad42bd5cda20b73dc3481ea70eb6.slice", "Containers": [{"Id": "99d85490249289715693598410ac15da12737f9c06716d17f1e88833cf64013c", "Name": "26b14ba9bd4b-infra", "State": "created"}], "CreateCgroup": true, "CreateCommand": ["podman", "pod", "create", "--name", "pod-synapse"], "CreateInfra": true, "Created": "2023-10-12T10:53:42.729242763+02:00", "ExitPolicy": "continue", "Hostname": "", "Id": "26b14ba9bd4bbfc587ee601ce9a523ce94d4ad42bd5cda20b73dc3481ea70eb6", "InfraConfig": {"DNSOption": null, "DNSSearch": null, "DNSServer": null, "HostAdd": null, "HostNetwork": false, "NetworkOptions": null, "Networks": ["podman"], "NoManageHosts": false, "NoManageResolvConf": false, "PortBindings": {}, "StaticIP": "", "StaticMAC": "", "pid_ns": "private", "userns": "host", "uts_ns": "private"}, "InfraContainerID": "99d85490249289715693598410ac15da12737f9c06716d17f1e88833cf64013c", "Name": "pod-synapse", "NumContainers": 1, "SharedNamespaces": ["uts", "ipc", "net"], "State": "Created"}, "podman_actions": ["podman pod rm -f pod-synapse", "podman pod create --name pod-synapse"], "podman_systemd": {"pod-pod-synapse": "# pod-pod-synapse.service\n# autogenerated by Podman 4.4.1\n# Thu Oct 12 10:53:43 CEST 2023\n\n[Unit]\nDescription=Podman pod-pod-synapse.service\nDocumentation=man:podman-generate-systemd(1)\nWants=network-online.target\nAfter=network-online.target\nRequiresMountsFor=/run/containers/storage\nWants=\nBefore=\n\n[Service]\nEnvironment=PODMAN_SYSTEMD_UNIT=%n\nRestart=always\nTimeoutStopSec=70\nExecStart=/usr/bin/podman start 26b14ba9bd4b-infra\nExecStop=/usr/bin/podman stop  \\\n\t-t 10 26b14ba9bd4b-infra\nExecStopPost=/usr/bin/podman stop  \\\n\t-t 10 
26b14ba9bd4b-infra\nPIDFile=/run/containers/storage/overlay-containers/99d85490249289715693598410ac15da12737f9c06716d17f1e88833cf64013c/userdata/conmon.pid\nType=forking\n\n[Install]\nWantedBy=default.target\n"}, "stderr": "", "stderr_lines": [], "stdout": "26b14ba9bd4bbfc587ee601ce9a523ce94d4ad42bd5cda20b73dc3481ea70eb6\n", "stdout_lines": ["26b14ba9bd4bbfc587ee601ce9a523ce94d4ad42bd5cda20b73dc3481ea70eb6"]}

Additional environment details (AWS, VirtualBox, physical, etc.):

The target server is a RHEL9 on a VPS

@thibaultamartin
Author

For the sake of completeness, I upgraded to the latest version of the collection with ansible-galaxy collection install --upgrade containers.podman (which installed 1.10.3), and the same behaviour occurs.

@thibaultamartin
Author

Interestingly, if I add the (same) publish property to the generate_systemd task, it doesn't remove the pod. I'm in a bit of a weird situation: I need to create the pod first so I can add containers to it, and I need to give it the exact same spec when I generate the systemd configuration… which can only happen after I've added the containers :)
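For reference, the workaround amounts to repeating the publish value in the generate_systemd task so the module sees an identical pod spec and does not recreate it (shown here with the values from the role above):

```yaml
- name: Generate the systemd unit for Synapse
  containers.podman.podman_pod:
    name: pod-synapse
    # Must match the publish value used when the pod was created,
    # otherwise the module recreates the pod and wipes its containers
    publish:
      - "10.8.0.2:9000:9000"
    generate_systemd:
      path: /etc/systemd/system
      restart_policy: always
```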

@sshnaidm sshnaidm added the bug Something isn't working label Oct 12, 2023