Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When the healthcheck of a container is set to the value none, the container is recreated on every run instead of being created only once.
Steps to reproduce the issue:
Run the provided podman-demo.yml playbook (contents pasted below).
Describe the results you received:
A new container will be created (changed) by the first task. The second task will recreate the container (changed) despite having the exact same configuration. If the --diff option is provided, Ansible will show that the healthcheck was empty instead of "none", which is why the container keeps being recreated.
Describe the results you expected:
The first task should create the container if it doesn't exist (changed). The second task should properly compare the running container with the requested properties and do nothing (ok).
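The behavior described above suggests the module compares the requested healthcheck ("none") against the value read back from the existing container (empty) and treats them as different. As a minimal sketch (not the collection's actual code; all names here are hypothetical), a normalization step would make the two spellings compare equal:

```python
def normalize_healthcheck(value):
    """Treat an absent, empty, or 'none' healthcheck as 'disabled'.

    Hypothetical helper: podman reports no healthcheck at all for a
    container created with healthcheck 'none', so both spellings must
    normalize to the same value for the comparison to be idempotent.
    """
    if value is None or str(value).strip().lower() in ("", "none"):
        return None
    return value


def healthcheck_changed(requested, actual):
    # Recreate the container only if the normalized values differ.
    return normalize_healthcheck(requested) != normalize_healthcheck(actual)
```

With this normalization, a requested value of "none" compared against an empty/absent healthcheck would report no change, and the second task would return ok instead of changed.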
Additional information you deem important (e.g. issue happens only occasionally):
There is also a --no-healthcheck option for podman, but this collection does not yet seem to support it. I could potentially open a separate feature request for that.
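For illustration, if support for podman's --no-healthcheck flag were added, a task might look like the following. Note that no_healthcheck is a hypothetical parameter name, not an existing option of this collection:

```yaml
- name: Run container without a healthcheck (hypothetical)
  containers.podman.podman_container:
    name: filebrowser-podman-demo
    image: docker.io/filebrowser/filebrowser:v2
    no_healthcheck: true  # hypothetical: would map to `podman run --no-healthcheck`
    state: started
```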
Version of the containers.podman collection (output of ansible-galaxy collection list | grep containers.podman):

containers.podman 1.10.2
containers.podman 1.10.3

Output of podman version:
Client: Podman Engine
Version: 4.7.0
API Version: 4.7.0
Go Version: go1.20.8
Built: Wed Sep 27 20:24:38 2023
OS/Arch: linux/amd64
Output of podman info --debug:
host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 97.39
    systemPercent: 0.7
    userPercent: 1.91
  cpus: 12
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "38"
  eventLogger: journald
  freeLocks: 2018
  hostname: stroopwafel
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.5.7-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 16531881984
  memTotal: 33246187520
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.9.2-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.9.2
      commit: 35274d346d2e9ffeacb22cc11590b0266a23d634
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20231004.gf851084-1.fc38.x86_64
    version: |
      pasta 0^20231004.gf851084-1.fc38.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 3h 50m 24.00s (Approximately 0.12 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/ingmar/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/ingmar/.local/share/containers/storage
  graphRootAllocated: 1022488477696
  graphRootUsed: 533124284416
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 7
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/ingmar/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.0
  Built: 1695839078
  BuiltTime: Wed Sep 27 20:24:38 2023
  GitCommit: ""
  GoVersion: go1.20.8
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.0
Package info (e.g. output of rpm -q podman or apt list podman):
podman-4.7.0-1.fc38.x86_64
Playbook you run with ansible (e.g. content of playbook.yaml):
---
# podman-demo.yml
- name: Podman healthcheck demo
  hosts: localhost
  connection: local
  gather_facts: true
  tasks:
    - ansible.builtin.debug:
        msg: Running the container normally
    - name: Run container
      containers.podman.podman_container:
        name: filebrowser-podman-demo
        image: docker.io/filebrowser/filebrowser:v2
        state: started
    - name: Run container again
      containers.podman.podman_container:
        name: filebrowser-podman-demo
        image: docker.io/filebrowser/filebrowser:v2
        state: started
    - name: Remove container (return to original state)
      containers.podman.podman_container:
        name: filebrowser-podman-demo
        state: absent
    - ansible.builtin.debug:
        msg: Running the container with healthcheck set to 'none'  # https://docs.podman.io/en/latest/markdown/podman-run.1.html#health-cmd-command-command-arg1
    - name: Run container with healthcheck set to 'none'
      containers.podman.podman_container:
        name: filebrowser-podman-demo
        image: docker.io/filebrowser/filebrowser:v2
        healthcheck: none
        state: started
    - name: Run container again (This should not trigger a recreation)
      containers.podman.podman_container:
        name: filebrowser-podman-demo
        image: docker.io/filebrowser/filebrowser:v2
        healthcheck: none
        state: started
    - name: Remove container (return to original state)
      containers.podman.podman_container:
        name: filebrowser-podman-demo
        state: absent
Command line and output of ansible run with high verbosity
Please NOTE: if you submit a bug about idempotency, run the playbook with the --diff option, like:

ansible-playbook podman-demo.yml --diff -vv
Besides it being a feature of podman itself (see here), I wanted to disable the healthcheck for one of my containers because it was causing a lot of log messages with the contents container exec_died ... coming from the healthcheck.
This is somewhat related to this issue where someone also asked for a log level feature: containers/podman#17856.
An example of the amount of logging that gets aggregated is here (left: before I disabled the healthcheck; right: after I set it to none).
The reason I'm disabling this is that I don't need the healthcheck (I have my own monitoring) and I want to reduce disk I/O. But honestly, I don't think my reasoning should be entirely relevant; surely someone at some point has a better reason to use this feature, and I think it's nice if that is supported (it being an intended feature after all 😄).