
Network options break idempotence for both container and pod #555

Open
BenjaminSchubert opened this issue Feb 26, 2023 · 9 comments
Labels
bug/idempotency Bug related to idempotency of modules bug Something isn't working

Comments

@BenjaminSchubert
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When creating a pod or a container with network options, idempotence breaks: the pod/container is restarted on every run.

Steps to reproduce the issue:

  1. Create a playbook containing:
- name: Reproduce
  hosts: localhost
  tasks:
    - name: Test
      containers.podman.podman_network:
        name: test

    - name: Test
      containers.podman.podman_container:
        name: test
        network:
          - "test:alias=testing"
        image: docker.io/containous/whoami
  2. Apply it: `ansible-playbook playbook.yml`
  3. Apply it a second time: `ansible-playbook playbook.yml`

Describe the results you received:
On the second run the container is reported as changed; it should not have been.

Describe the results you expected:
No change: the container should have been left untouched.

Additional information you deem important (e.g. issue happens only occasionally):
This also happens for pods.

Version of the containers.podman collection:

commit 7c06ddec3b51bab66d6733ce6d78a29681246a02

Output of ansible --version:

ansible [core 2.14.2]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/myuser/.virtualenvs/services/lib/python3.11/site-packages/ansible
  ansible collection location = /home/myuser/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/myuser/.virtualenvs/services/bin/ansible
  python version = 3.11.2 (main, Feb  8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/home/myuser/.virtualenvs/services/bin/python)
  jinja version = 3.1.2
  libyaml = True

Output of podman version:

podman version 4.3.1

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - memory
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.3+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.3, commit: unknown'
  cpuUtilization:
    idlePercent: 96.72
    systemPercent: 0.78
    userPercent: 2.5
  cpus: 16
  distribution:
    codename: bookworm
    distribution: debian
    version: unknown
  eventLogger: file
  hostname: instance
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
  kernel: 6.1.11-200.fc37.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 7267811328
  memTotal: 67124838400
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /tmp/podman-run-1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /tmp/podman-run-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 68702695424
  swapTotal: 68702695424
  uptime: 173h 29m 53.00s (Approximately 7.21 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 11
    paused: 0
    running: 11
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/podman/.local/share/containers/storage
  graphRootAllocated: 922876903424
  graphRootUsed: 203244806144
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 6
  runRoot: /tmp/podman-run-1000/containers
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.19.5
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman/now 4.3.1+ds1-5+b2 amd64 [installed,local]
@BenjaminSchubert
Contributor Author

BenjaminSchubert commented Feb 26, 2023

This information is available on containers in the output of podman inspect:

"Networks": {
    "test": {
          "EndpointID": "",
          "Gateway": "10.89.3.1",
          "IPAddress": "10.89.3.4",
          "IPPrefixLen": 24,
          "IPv6Gateway": "",
          "GlobalIPv6Address": "",
          "GlobalIPv6PrefixLen": 0,
          "MacAddress": "b2:ef:7c:60:f0:bd",
          "NetworkID": "test",
          "DriverOpts": null,
          "IPAMConfig": null,
          "Links": null,
          "Aliases": [
              "testing",
              "18890fa60240"
          ]
    }
}

Though pods don't show it (their containers do).

I wonder if it would not be easier to parse the `CreateCommand` from `podman inspect` when computing all those diffs?

I am happy to help implement a fix if I can get guidance on how it should be fixed.
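
The inspect output above shows why a naive comparison fails: the requested spec is `test:alias=testing`, while inspect reports the network keyed by its bare name, with the alias stored separately (alongside an auto-added container-ID alias). A minimal Python sketch, using a hypothetical helper that is not part of the collection, of splitting a `name:key=value,...` network spec so both sides can be compared on equal terms:

```python
def parse_network_spec(spec):
    """Split a podman network spec like 'test:alias=testing' into
    (network_name, options_dict). A bare name yields empty options."""
    name, _, raw_opts = spec.partition(":")
    opts = {}
    if raw_opts:
        for item in raw_opts.split(","):
            key, _, value = item.partition("=")
            opts[key] = value
    return name, opts

print(parse_network_spec("test:alias=testing"))  # ('test', {'alias': 'testing'})
print(parse_network_spec("test"))                # ('test', {})
```

With the spec parsed this way, the module could compare the network name against the inspect key and the `alias` option against the `Aliases` list, instead of comparing the raw string against the bare name.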

@sshnaidm sshnaidm added bug Something isn't working bug/idempotency Bug related to idempotency of modules labels Feb 26, 2023
@kristvanbesien

I encountered the same issue. The task that creates the pod looks like this:

- name: create a pod for the services with a static ip
  containers.podman.podman_pod:
    name: services
    network:
      - "eth0_lan:ip=192.168.3.220"
    state: created

Running the playbook with --diff shows, for the second run:

TASK [create a pod for the services with a static ip] *****************************************************************************************************************************************************
--- before
+++ after
@@ -1 +1 @@
-network - ['eth0_lan']
+network - ['eth0_lan:ip=192.168.3.220']

Looks like podman_network_info parses the network options incorrectly.
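
The diff above is consistent with the "before" state keeping only the network name while the requested spec keeps its options, so the two sides can never match. A tiny illustration of that asymmetry (hypothetical, mirroring the diff shown):

```python
# What the task requests:
requested = ["eth0_lan:ip=192.168.3.220"]

# If the "current state" side is built from network names only,
# the ip= option is lost before the comparison even happens:
inspected = [spec.split(":", 1)[0] for spec in requested]

print(inspected)               # ['eth0_lan']
print(inspected == requested)  # False -> the module reports "changed" on every run
```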

@xavierog

xavierog commented Nov 3, 2023

I am also affected by this issue.

I wonder if it would not be easier to parse the createcommand for doing all those diffs?

That very thought also occurred to me and pushed me to create the following workaround:

  1. compute a reference podman container run ... command based on the subset of arguments I use (name, hostname, volumes, etc.)
  2. fetch the create command from podman inspect (falling back to an empty array if anything goes wrong)
  3. run containers.podman.podman_container only when the commands differ

The whole thing is not exactly elegant, and it likely comes with many caveats, but here it is:

- set_fact:
    container_create_reference: |
      {% set ref = container.create_args                            %}
      {% set ccr = ["podman", "container", "run", "--name"]         %}
      {% set _ = ccr.append(ref.name)                               %}
      {% if 'hostname' in ref                                       %}
      {%   set _ = ccr.append("--hostname")                         %}
      {%   set _ = ccr.append(ref.hostname)                         %}
      {% endif                                                      %}
      {% for key in ('network', 'publish')                          %}
      {%   for value in ref.get(key, []):                           %}
      {%     set _ = ccr.append("--" + key)                         %}
      {%     set _ = ccr.append(value)                              %}
      {%   endfor                                                   %}
      {% endfor                                                     %}
      {% for key, value in ref.get("env", {}).items()               %}
      {%   set _ = ccr.append("--env")                              %}
      {%   set _ = ccr.append(key + "=" + value)                    %}
      {% endfor                                                     %}
      {% if 'restart_policy' in ref                                 %}
      {%   set _ = ccr.append("--restart=" + ref.restart_policy)    %}
      {% endif                                                      %}
      {% for value in ref.get("volumes", [])                        %}
      {%   set _ = ccr.append("--volume")                           %}
      {%   set _ = ccr.append(value)                                %}
      {% endfor                                                     %}
      {% set _ = ccr.append("--detach=True")                        %}
      {% set _ = ccr.append(ref.image)                              %}
      {{ ccr }}

- block:
  - shell: |-
      podman container inspect '{{container.create_args.name}}' | jq -c '.[0].Config.CreateCommand'
    changed_when: False
    register: create_command
  rescue:
  - set_fact:
      create_command:
        stdout: '[]'

- when:
  - 'create_command.stdout|from_json != container_create_reference'
  containers.podman.podman_container:
    state: started
  args: '{{ container.create_args }}'

@sshnaidm
Member

@BenjaminSchubert @kristvanbesien @xavierog should be fixed for containers by #745
Please reopen if you still see the issue.

@xavierog

should be fixed for containers by #745

I have not tested it yet, but a brief look at https://github.com/containers/ansible-podman-collections/pull/745/files shows only one item that might be related to this issue: the addition of the ip6 parameter.
But the idempotency of the network option is way more complex.

@sshnaidm could you (briefly) clarify what #745 brings to this issue?

sshnaidm added a commit to sshnaidm/ansible-podman-collections that referenced this issue May 23, 2024
Related: containers#555
Signed-off-by: Sagi Shnaidman <sshnaidm@redhat.com>
@sshnaidm
Member

@xavierog Ok, for networking I need to do more. How about this? #756

@sshnaidm sshnaidm reopened this May 23, 2024
@sshnaidm
Member

I ran:

tasks:

    - name: Test
      containers.testpodman.podman_network:
        name: test

    - name: Test
      containers.testpodman.podman_container:
        name: test
        network:
          - "test:alias=testing"
        image: docker.io/containous/whoami

    - name: Test
      containers.testpodman.podman_network:
        name: test

    - name: Test
      containers.testpodman.podman_container:
        name: test
        network:
          - "test:alias=testing"
        image: docker.io/containous/whoami

and seems ok:

PLAY [all] ****************************************************************************************************************************************************************

TASK [Test] ***************************************************************************************************************************************************************
changed: [localhost]

TASK [Test] ***************************************************************************************************************************************************************
changed: [localhost]

TASK [Test] ***************************************************************************************************************************************************************
ok: [localhost]

TASK [Test] ***************************************************************************************************************************************************************
ok: [localhost]

I completely gave up on calculating networking state from inspection; it changes drastically from version to version. I just look at `CreateCommand` instead: what you provide in args is what you get.
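
The `CreateCommand` approach described above amounts to: read `Config.CreateCommand` from `podman inspect` and compare the network values recorded there with the ones currently requested. A hedged Python sketch of that idea (an illustration, not the collection's actual code):

```python
def networks_from_create_command(create_command):
    """Collect the values passed via --network/--net in a recorded
    podman CreateCommand argument list."""
    nets = []
    for i, arg in enumerate(create_command):
        if arg in ("--network", "--net") and i + 1 < len(create_command):
            nets.append(create_command[i + 1])
        elif arg.startswith("--network="):
            nets.append(arg.split("=", 1)[1])
    return nets

# Example CreateCommand as podman would record it for the repro playbook:
cmd = ["podman", "container", "run", "--name", "test",
       "--network", "test:alias=testing", "docker.io/containous/whoami"]

desired = ["test:alias=testing"]
print(networks_from_create_command(cmd) == desired)  # True -> no change needed
```

Because both sides are the user-supplied strings, this comparison is immune to podman changing the shape of its inspect networking output between versions.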

sshnaidm added a commit to sshnaidm/ansible-podman-collections that referenced this issue May 23, 2024
Related: containers#555
Signed-off-by: Sagi Shnaidman <sshnaidm@redhat.com>
@xavierog

This new commit looks definitely better -- I'll give it a try (likely on Friday).

sshnaidm added a commit that referenced this issue May 23, 2024
Related: #555

Signed-off-by: Sagi Shnaidman <sshnaidm@redhat.com>
@xavierog

xavierog commented May 24, 2024

@sshnaidm It works! After git-cloning your latest commits, I removed my 58-line long container_create_reference hack and ran ansible-playbook -C (with other suitable arguments, of course), and all of my existing, running containers were marked as ok (as opposed to changed).
Thanks for that fix. Additionally, the introduction of the umask parameter should also make my inventory simpler and clearer.
