
generate_systemd don't work for a pod #491

Open
CyberFox001 opened this issue Oct 20, 2022 · 34 comments
Labels
enhancement New feature or request

Comments

@CyberFox001
Contributor

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Description

If I generate the systemd .service file for a pod when I create the pod,
before the containers are created, then systemd fails to start the
service.

I got this message:

Unable to start service pod-peertube_instance.service: Job for pod-peertube_instance.service failed because the service did not take the steps required by its unit configuration.
See "systemctl --user status pod-peertube_instance.service"

I think it's because the .service file is generated before the
containers are added to the pod, so the container services are missing
from its dependencies.
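
For context, a pod unit produced by podman generate systemd normally references its container units, roughly like the sketch below (illustrative only: the container unit names are made up and the exact directives can vary between Podman versions). A pod unit generated while the pod is still empty has none of these lines, so starting it does not start any containers:

# Illustrative fragment of a pod unit, not actual output from this setup
[Unit]
Description=Podman pod-peertube_instance.service
Wants=container-peertube-postgres.service container-peertube-redis.service
Before=container-peertube-postgres.service container-peertube-redis.service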

If I generate the systemd .service file for the pod after the containers are created, in a separate task like this:

- name: Generate the .service file for the pod
  containers.podman.podman_pod:
    name: "{{ peertube_pod_name }}"
    generate_systemd:
      path: "{{ systemd_services_path }}"
      restart_policy: always
      names: true
      container_prefix: peertube

The result is the deletion of all the containers inside the pod, except for the infra container.

In general, the documentation about systemd integration with pods or containers is unclear:

  • Does generating the systemd file for a pod also generate the .service files of its containers?
  • When should I generate the systemd files? At pod creation? After the containers are added to the pod?
  • Do I have to stop a container before integrating it with systemd?

Steps to reproduce the issue:

  1. In your playbook, have a list of tasks that creates a pod with a few containers and generates the systemd files at the same time, like this:
# Pod creation
- name: Be sure the pod exists
  containers.podman.podman_pod:
    name: "{{ peertube_pod_name }}"
    state: created
    ports:
      - 1935:1935
      - 9000:9000
    generate_systemd:
      path: "{{ systemd_services_path }}"
      restart_policy: always
      names: true
      container_prefix: peertube
      
# Container creation
- name: Be sure the Postgresql container exists
  containers.podman.podman_container:
    pod: "{{ peertube_pod_name }}"
    name: "{{ postgres_container_name }}"
    image: "{{ postgres_image_path }}:{{ postgres_version }}"
    state: stopped
    env:
      POSTGRES_DB: "{{ postgres_database_name }}"
      POSTGRES_USER: "{{ postgres_user }}"
      POSTGRES_PASSWORD: "{{ postgres_password }}"
    volume:
      - "{{ postgres_volume_data_name }}:/var/lib/postgresql/data"
    generate_systemd:
      path: "{{ systemd_services_path }}"
      restart_policy: always
      names: true
      container_prefix: peertube

    
- name: Be sure the Redis container exists
  containers.podman.podman_container:
    pod: "{{ peertube_pod_name }}"
    name: "{{ redis_container_name }}"
    image: "{{ redis_image_path }}:{{ redis_version }}"
    state: stopped
    volume:
      - "{{ redis_volume_data_name }}:/data"
    generate_systemd:
      path: "{{ systemd_services_path }}"
      restart_policy: always
      names: true
      container_prefix: peertube


- name: Be sure the Peertube container exists
  containers.podman.podman_container:
    pod: "{{ peertube_pod_name }}"
    name: "{{ peertube_container_name }}"
    image: "{{ peertube_image_path }}:{{ peertube_version }}"
    state: stopped
    env:
      PEERTUBE_WEBSERVER_HOSTNAME: peertube.local
      PEERTUBE_WEBSERVER_PORT: 9000
      PEERTUBE_WEBSERVER_HTTPS: "false"
      PEERTUBE_DB_HOSTNAME: localhost
      PEERTUBE_DB_USERNAME: "{{ postgres_user }}"
      PEERTUBE_DB_PASSWORD: "{{ postgres_password }}"
      PEERTUBE_REDIS_HOSTNAME: localhost
      PEERTUBE_ADMIN_EMAIL: "{{ peertube_admin_email }}"
    volume:
      - "{{ peertube_volume_assets_name }}:/app/client/dist"
      - "{{ peertube_volume_data_name }}:/data"
      - "{{ peertube_volume_config_name }}:/config"
    requires:
      - "{{ postgres_container_name }}"
      - "{{ redis_container_name }}"
    generate_systemd:
      path: "{{ systemd_services_path }}"
      restart_policy: always
      names: true
      container_prefix: peertube
  2. In your playbook, reload the systemd daemon and try to start the pod service:
# Systemd integration
- name: Reload the systemd daemon
  systemd:
    scope: user
    daemon_reload: yes

- name: Start and Enable the pod from systemd
  systemd:
    scope: user
    name: pod-peertube_instance.service
    state: started
    enabled: yes

Describe the results you received:

A failure at the task that starts the systemd service for the pod, with the message:

Unable to start service pod-peertube_instance.service: Job for pod-peertube_instance.service failed because the service did not take the steps required by its unit configuration.
See "systemctl --user status pod-peertube_instance.service"

The command systemctl --user status pod-peertube_instance.service says that the pod is working, and journalctl has no information.

If I stop all the containers and start only the pod, the containers of the pod are not started.

Describe the results you expected:

The entire pod should be managed through the systemd .service file, as when
I generate a .service file with the podman generate systemd command.

And in general, integration with systemd should be simpler and better
covered in the documentation.

Additional information you deem important (e.g. issue happens only occasionally):

Version of the containers.podman collection:
Either git commit if installed from git: git show --summary
Or version from ansible-galaxy if installed from galaxy: ansible-galaxy collection list | grep containers.podman

1.9.4

Output of ansible --version:

ansible [core 2.13.4]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/a/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.11/site-packages/ansible
  ansible collection location = /home/a/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.11.0rc2 (main, Sep 13 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
  jinja version = 3.0.3
  libyaml = True

Output of podman version:

Client:       Podman Engine
Version:      4.3.0-rc1
API Version:  4.3.0-rc1
Go Version:   go1.19.1
Built:        Tue Sep 27 20:06:06 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.28.0-dev
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.4-2.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.4, commit: '
  cpuUtilization:
    idlePercent: 71.35
    systemPercent: 4.74
    userPercent: 23.91
  cpus: 8
  distribution:
    distribution: fedora
    variant: workstation
    version: "37"
  eventLogger: journald
  hostname: sherazad-lan
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.19.13-300.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 771207168
  memTotal: 16712052736
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.6-2.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.6
      commit: 18cf2efbb8feb2b2f20e316520e0fd0b6c41ef4d
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-5.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 24975495168
  swapTotal: 25769795584
  uptime: 133h 34m 28.00s (Approximately 5.54 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/a/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.9-3.fc37.x86_64
      Version: |-
        fusermount3 version: 3.10.5
        fuse-overlayfs: version 1.9
        FUSE library version 3.10.5
        using FUSE kernel interface version 7.31
  graphRoot: /home/a/.local/share/containers/storage
  graphRootAllocated: 480534519808
  graphRootUsed: 219970920448
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  volumePath: /home/a/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.0-rc1
  Built: 1664301966
  BuiltTime: Tue Sep 27 20:06:06 2022
  GitCommit: ""
  GoVersion: go1.19.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.0-rc1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.3.0~rc1-1.fc37.x86_64

Playbook you run with ansible (e.g. content of playbook.yaml):

See above

@CyberFox001
Contributor Author

As you need to define all the pods, containers, and their dependencies before generating the systemd files, it may be necessary to create a separate Ansible module for generating the systemd files, instead of an option in podman_pod and podman_container.

Something like a module named podman_generate, which could generate systemd files, a Kubernetes play, or a Specgen file. A module where, when you give the pod name, it generates all the systemd files (pod and containers) at once.
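
As a rough sketch, a call to such a module could look like the task below (hypothetical: the module name, option names, and values are assumptions for illustration, not an existing interface):

# Hypothetical invocation of the proposed module (names are illustrative only)
- name: Generate all systemd unit files for the pod and its containers
  containers.podman.podman_generate:       # hypothetical module name from the proposal above
    kind: systemd                          # could also be "kube" or "spec", per the proposal
    name: "{{ peertube_pod_name }}"        # pod whose units (pod and containers) are generated
    dest: "{{ systemd_services_path }}"
    restart_policy: always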

@sshnaidm
Member

Agree, we need a new module for this.

@CyberFox001
Contributor Author

Maybe I can do it?

Any requirements for this new module?

@sshnaidm
Member

Maybe I can do it?

Any requirements for this new module?

Would be great. You can use one of the existing modules as an example, like the podman save/load or login/logout modules. I'll help with it if needed.

@CyberFox001
Contributor Author

Do I have tests to write? If yes, what system do you use?

@sshnaidm
Member

You can write tests just as a sequence of tasks; their execution in GH jobs is a little bit tricky, and I need to describe it properly. For now you can add them here: https://github.com/containers/ansible-podman-collections/tree/master/tests/integration/targets
And I'll add them to be executed in GH Actions.

@CyberFox001
Contributor Author

OK, I started to work on it.

How should the module act in case the user specifies an option only available in Podman 4.0.0 and above?

@CyberFox001
Contributor Author

And which minimum version of Python should I target?

@sshnaidm
Member

About the Python version - I'd keep it compatible with Python 3.6.8, as it's the highest version available on CentOS 7. This module shouldn't be too complex, so it's worth keeping it as compatible as possible.
About Podman versions you have 2 ways:

  1. Rely on Podman to return errors and just pass them through:

     rc, out, err = module.run_command(command)
     if rc != 0:
         module.fail_json(msg="Error exporting container %s: %s" % (
             module.params['container'], err))

  2. Check the Podman version, and add arguments together with the versions in which they appeared, checking them for compatibility:

     def check_version(self, param, minv=None, maxv=None):
         if minv and LooseVersion(minv) > LooseVersion(
                 self.podman_version):
             self.module.fail_json(msg="Parameter %s is supported from podman "
                                       "version %s only! Current version is %s" % (
                                           param, minv, self.podman_version))
         if maxv and LooseVersion(maxv) < LooseVersion(
                 self.podman_version):
             self.module.fail_json(msg="Parameter %s is supported till podman "
                                       "version %s only! Current version is %s" % (
                                           param, maxv, self.podman_version))

     Though with way 2 you'll duplicate the work of Podman, so I'd go for just relying on Podman and its errors.

@CyberFox001
Contributor Author

OK, I got a working module, tested with a container. :)

I need to test it with a pod, write the docs and the integration test, and then I can make a pull request.

I will do it later or tomorrow.

@d-513
Contributor

d-513 commented Oct 23, 2022

Thanks for doing this, it will be very useful for me.

@CyberFox001
Contributor Author

You can write tests just as a sequence of tasks; their execution in GH jobs is a little bit tricky, and I need to describe it properly. For now you can add them here: https://github.com/containers/ansible-podman-collections/tree/master/tests/integration/targets And I'll add them to be executed in GH Actions.

Which user are these tests run as?

@CyberFox001
Contributor Author

You can write tests just as a sequence of tasks; their execution in GH jobs is a little bit tricky, and I need to describe it properly. For now you can add them here: https://github.com/containers/ansible-podman-collections/tree/master/tests/integration/targets And I'll add them to be executed in GH Actions.

Do I test every feature of the module, or only the generation of systemd units and writing them to a file?

@sshnaidm
Member

Which user are these tests run as?

With a regular user. If you need tests for root containers as well, use become like here:

- name: Test idempotency for root pods
  include_tasks: root-pod.yml
  vars:
    ansible_python_interpreter: "/usr/bin/python"
  args:
    apply:
      become: true

Usually all test roles are executed from here: https://github.com/containers/ansible-podman-collections/tree/master/ci/playbooks/containers

Do I test every feature of the module, or only the generation of systemd units and writing them to a file?

Better to test all arguments you pass to the module and see if they are applied, and also a default run without arguments. You can see this as an example:

- name: Run container with systemd generation parameters
  containers.podman.podman_container:
    executable: "{{ test_executable | default('podman') }}"
    name: container1
    image: alpine
    state: started
    command: sleep 20m
    generate_systemd:
      path: /tmp/
      restart_policy: always
      time: 120
      no_header: true
      names: true
      pod_prefix: whocares
      separator: zzzz
      container_prefix: contain
  register: system1

- name: Check service file presents
  stat:
    path: /tmp/containzzzzcontainer1.service
  register: service_file

- name: Check that container has correct systemd output
  assert:
    that:
      - system1.podman_systemd.keys() | list | first == 'containzzzzcontainer1'
      - system1.podman_systemd.values() | list | length > 0
      - service_file.stat.exists | bool
      - "'stop -t 120 container1' in system1.podman_systemd.values() | list | first"
      - "'Restart=always' in system1.podman_systemd.values() | list | first"
      - "'autogenerated by Podman' not in system1.podman_systemd.values() | list | first"

@CyberFox001
Contributor Author

OK, I'm close to making the pull request.

I'm trying to run ansible-test to be sure the test I wrote is correct.
I tested the module manually and it works.

When I run:

ansible-test integration podman_generate_systemd

ansible-test responds FATAL: The current working directory must be within the source tree being tested..
But I am in the root of the project dir when I run it. Am I missing something?

PS: Do I remove the generate_systemd option from podman_pod and podman_container?

@CyberFox001
Contributor Author

CyberFox001 commented Oct 25, 2022

ansible-test responds FATAL: The current working directory must be within the source tree being tested.. But I am in the root of the project dir when I run it. Am I missing something?

Oh, I get it. The path needs to be ~/Projects/ansible_collections/containers/podman/, with nothing else starting from ansible_collections/.

@sshnaidm
Member

sshnaidm commented Oct 25, 2022

@CyberFox001 I don't use ansible-test for integration tests, only for sanity. What is the name of the module? I'll prepare a starting test patch that you can include in your PR. Sorry about the lack of docs, I will work on it as time allows.

@CyberFox001
Contributor Author

After I moved the project directory to the path ~/Projets/ansible_collections/containers/podman/, I ran the integration tests with this command from the root of the project directory:

ansible-test integration podman_generate_systemd

And all my tests pass. :D

So now I will do the last commits.

Do I need to remove the generate_systemd option from podman_pod and podman_container?

@sshnaidm
Member

After I moved the project directory to the path ~/Projets/ansible_collections/containers/podman/, I ran the integration tests with this command from the root of the project directory:

ansible-test integration podman_generate_systemd

And all my tests pass. :D

So now I will do the last commits.

Do I need to remove the generate_systemd option from podman_pod and podman_container?

Great, maybe it's worth doing it in CI as well 😆 Let's leave these functions for now.

@CyberFox001
Contributor Author

OK, I will make the pull request.

@CyberFox001
Contributor Author

Pull request done: #498

@CyberFox001
Contributor Author

What is the name of the module?

podman_generate_systemd

@CyberFox001
Contributor Author

Pull request done: #498

So, after the checks ran, all the sanity checks have errors.

The first sanity check has 2 problems: my name and a TypeError.

For the first one, is it because of the accent in my first name?

For the TypeError in argument_spec, is it because I use a more classic way to define the dictionaries?

@sshnaidm
Member

  1. I think author should contain your GitHub nickname; see how it's done here:
     author:
       - "Jason Hiatt (@jthiatt)"
       - "Clemens Lange (@clelange)"
  2. ERROR: plugins/modules/podman_generate_systemd.py:353:42: use argspec type="path" instead of type="str" to avoid use of 'expanduser'. It's about this line: systemd_units_dest = os.path.expanduser(module.params.get('dest'))
  3. Please remove all trailing whitespace from plugins/modules/podman_generate_systemd.py; you can configure your IDE to do this automatically.
  4. ERROR: plugins/modules/podman_generate_systemd.py:0:0: missing: __metaclass__ = type. See documentation for help: https://docs.ansible.com/ansible/2.9/dev_guide/testing/sanity/metaclass-boilerplate.html
  5. ERROR: plugins/modules/podman_generate_systemd.py:221:0: traceback: TypeError: 'type' object is not subscriptable - constructions like def generate_systemd(module: AnsibleModule) -> tuple[bool, list[str], str]: are not supported
  6. ERROR: plugins/modules/podman_generate_systemd.py:0:0: missing: from __future__ import (absolute_import, division, print_function). See documentation for help: https://docs.ansible.com/ansible/2.9/dev_guide/testing/sanity/future-import-boilerplate.html
  7. ERROR: plugins/modules/podman_generate_systemd.py:0:0: doc-choices-incompatible-type: DOCUMENTATION.options.restart_policy: Argument defines choices as (False) but this is incompatible with argument type str: Value must be string for dictionary value @ data['options']['restart_policy']. Got {'description': ['Restart policy of the service'], 'type': 'str', 'required': False, 'choices': [False, 'on-success', 'on-failure', 'on-abnormal', 'on-watchdog', 'on-abort', 'always']}
  8. ERROR: plugins/modules/podman_generate_systemd.py:0:0: doc-default-does-not-match-spec: Argument 'new' in argument_spec defines default as (False) but documentation defines default as (None)
     ERROR: plugins/modules/podman_generate_systemd.py:0:0: doc-default-does-not-match-spec: Argument 'use_names' in argument_spec defines default as (True) but documentation defines default as (None)
     ERROR: plugins/modules/podman_generate_systemd.py:0:0: invalid-documentation: DOCUMENTATION.author: Invalid author for dictionary value @ data['author']. Got ['Sébastien Gendre']
     ERROR: plugins/modules/podman_generate_systemd.py:0:0: nonexistent-parameter-documented: Argument 'use_name' is listed in DOCUMENTATION.options, but not accepted by the module argument_spec
     ERROR: plugins/modules/podman_generate_systemd.py:0:0: parameter-type-not-in-doc: Argument 'use_names' in argument_spec defines type as 'bool' but documentation doesn't define type
     ERROR: plugins/modules/podman_generate_systemd.py:0:0: undocumented-parameter: Argument 'use_names' is listed in the argument_spec, but not documented in the module documentation

If possible, let's use fewer new Python features to keep this collection compatible with more systems in the world.

@CyberFox001
Contributor Author

I modified the code for the first 6 points.

But for point 7: in the choices for restart_policy, the first one is the string 'no', and Ansible sees it as the boolean False. Is there a way to tell Ansible not to interpret 'no' as False?

@sshnaidm
Member

Nope, it's more of a YAML thing than an Ansible one. Let's use something like no-restart there; it's quite clear what it means. Later in the function you can check for this option specifically.
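
For illustration, this is standard YAML 1.1 behaviour rather than anything Ansible-specific: an unquoted no is loaded as the boolean false, while quoting keeps it a string (a minimal sketch, not code from the module):

# Minimal YAML sketch of the 'no' coercion (not taken from the module)
choices_unquoted:
  - no          # YAML 1.1 loaders read this as the boolean false
  - always
choices_quoted:
  - "no"        # stays the string "no"
  - always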

@CyberFox001
Contributor Author

OK, restart_policy modified. I also made the modifications for point 8. Hope everything is OK now.

@sshnaidm
Member

@CyberFox001 please include this code in your patch, so that we can see your tests in CI jobs: #501

@CyberFox001
Contributor Author

One code modification done, but one question about the targeted Python version.

Which command can I run on my desktop to run the same checks as those made in the pull request?

@sshnaidm
Member

If you mean the collection sanity tests, they are here:
https://github.com/sshnaidm/ansible-podman-collections/blob/gensys_ci/.github/workflows/collection-continuous-integration.yml#L95-L114

In general, you install the changed collection from source into some specific directory, cd into that directory's ansible_collections/containers/podman, and run there:

ansible-test sanity \
    --color \
    --requirements \
    --python "${{ matrix.python-version }}" -vvv \
    plugins/ tests/

where "${{ matrix.python-version }}" is your Python version. For the Ansible version, it's better to take both 2.9 and the latest (because 2.9 is very specific).
There is also a Docker version of ansible-test; more about it here: https://www.ansible.com/blog/introduction-to-ansible-test

@sshnaidm
Member

@CyberFox001 I think we can close this issue then?

@CyberFox001
Contributor Author

@sshnaidm Good question.

The issue was with the generate_systemd option of podman_container and podman_pod.
As these options are still there, unmodified, the issue is also still there.

But we can use the new podman_generate_systemd module in the next release and no longer have this issue with it.

I don't know what the best choice is: close this issue when the generate_systemd option is removed from the podman_pod and podman_container modules, when the podman_generate_systemd module is released, or now.

What do you think?

@shunkica

shunkica commented Feb 13, 2024

@CyberFox001 I am having the same issue with the podman_generate_systemd module.
After creating the service file, trying to set started/enabled with systemd_service results in the same error as in your original post, but running systemctl --user status shows that the service is started and enabled, despite this error.
Running the same play a second time does not result in this error; it only happens the first time.

---
- name: Test pods
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the pod
      containers.podman.podman_pod:
        name: "test"
        state: started
    - name: Generate systemd
      containers.podman.podman_generate_systemd:
        name: "test"
        dest: "~/.config/systemd/user/"
    - name: Make sure the pod is started and enabled
      ansible.builtin.systemd_service:
        name: "pod-test"
        state: started
        enabled: yes
        daemon_reload: yes
        scope: user

PLAY [Test pods] *****************************************************************************************************************************************************************************************************************************

TASK [Create the pod] ************************************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [Generate systemd] **********************************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [Start the pod] *************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to start service pod-test: Job for pod-test.service failed because the service did not take the steps required by its unit configuration.\nSee \"systemctl --user status pod-test.service\" and \"journalctl --user -xeu pod-test.service\" for details.\n"}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0  

The only way I could get it to work is if I put state: created in podman_pod, and then have systemd start it.
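
For reference, the working ordering described above would look roughly like this (a sketch that only reorders the tasks from the playbook earlier in this comment, with state: created):

# Sketch of the workaround: create (but don't start) the pod, generate the unit,
# then let systemd start it.
- name: Create the pod without starting it
  containers.podman.podman_pod:
    name: "test"
    state: created

- name: Generate the systemd unit file for the pod
  containers.podman.podman_generate_systemd:
    name: "test"
    dest: "~/.config/systemd/user/"

- name: Start and enable the pod through systemd
  ansible.builtin.systemd_service:
    name: "pod-test"
    state: started
    enabled: yes
    daemon_reload: yes
    scope: user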

@shunkica

The fact that generating the systemd service for a pod also generates the systemd services for its containers is also problematic.
This way we cannot fine-tune the containers' systemd files, e.g. dependencies and start order.
There should be a way to disable this behavior.
I expect the pod to be started empty, and then I can attach or detach containers as I see fit.
