Cannot checkpoint container: /usr/bin/nvidia-container-runtime did not terminate successfully: exit status 1 #2397

Open
dayinfinite opened this issue Apr 26, 2024 · 11 comments


@dayinfinite

Description
k8s 1.28
containerd 2.0

I want to call the kubelet checkpoint API with curl to create a container checkpoint.

Steps to reproduce the issue:

  1. curl -sk -X POST "https://127.0.0.1:10250/checkpoint/default/gpu-base-02/gpu-base-02" --key /etc/kubernetes/pki/apiserver-kubelet-client.key --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt

Describe the results you received:

An error occurs: checkpointing of default/gpu-base-02/gpu-base-02 failed (rpc error: code = Unknown desc = checkpointing container "208a82339ddc590e460b89912304f56ad64924f89a959f982b17aeb6ab0c2aa8" failed: /usr/bin/nvidia-container-runtime did not terminate successfully: exit status 1: criu failed: type NOTIFY errno 0 path= /run/containerd/io.containerd.runtime.v2.task/k8s.io/208a82339ddc590e460b89912304f56ad64924f89a959f982b17aeb6ab0c2aa8/criu-dump.log: unknown)

Describe the results you expected:

A container checkpoint is created successfully.

Additional information you deem important (e.g. issue happens only occasionally):

CRIU logs and information:

CRIU full dump/restore logs:

(00.011105) mnt: Inspecting sharing on 1494 shared_id 0 master_id 0 (@./proc/sys)
(00.011109) mnt: Inspecting sharing on 1493 shared_id 0 master_id 0 (@./proc/irq)
(00.011113) mnt: Inspecting sharing on 1492 shared_id 0 master_id 0 (@./proc/fs)
(00.011116) mnt: Inspecting sharing on 1491 shared_id 0 master_id 0 (@./proc/bus)
(00.011120) mnt: Inspecting sharing on 1611 shared_id 0 master_id 13 (@./proc/driver/nvidia/gpus/0000:b1:00.0)
(00.011124) Error (criu/mount.c:1088): mnt: Mount 1611 ./proc/driver/nvidia/gpus/0000:b1:00.0 (master_id: 13 shared_id: 0) has unreachable sharing. Try --enable-external-masters.
(00.011142) net: Unlock network
(00.011146) Running network-unlock scripts
(00.011149) RPC
(00.072541) Unfreezing tasks into 1
(00.072552) Unseizing 1641382 into 1
(00.072562) Unseizing 1641424 into 1
(00.072568) Unseizing 1641533 into 1
(00.072580) Unseizing 1641475 into 1
(00.072586) Unseizing 1641500 into 1
(00.072599) Unseizing 2157578 into 1
(00.072632) Error (criu/cr-dump.c:2093): Dumping FAILED.

Output of `criu --version`:

Version: 3.18

Output of `criu check --all`:

Looks good.

Additional environment details:

@adrianreber
Member

Checkpointing Kubernetes containers with Nvidia GPUs is not working as far as we know.

We have seen success with AMD GPUs.
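
For context, the generic Podman flow referred to here is sketched below; the flag names follow the podman-container-checkpoint and podman-container-restore man pages, and the AMD GPU case additionally relies on the amdgpu CRIU plugin being installed on the host. The archive path and container name are placeholders.

# Checkpoint a running container into an archive, then restore it from that archive.
podman container checkpoint --export=/tmp/ckpt.tar.gz <container>
podman container restore --import=/tmp/ckpt.tar.gz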

@dayinfinite
Author

Checkpointing Kubernetes containers with Nvidia GPUs is not working as far as we know.

We have seen success with AMD GPUs.
I checked, and this error comes from containerd, which uses CRIU. Do you know how to skip the NVIDIA devices when checkpointing a container that uses NVIDIA GPUs?

@dayinfinite
Author

Checkpointing Kubernetes containers with Nvidia GPUs is not working as far as we know.
We have seen success with AMD GPUs.
I checked, and this error comes from containerd, which uses CRIU. Do you know how to skip the NVIDIA devices when checkpointing a container that uses NVIDIA GPUs?

I just want to preserve the environment inside the container, mainly the files.

@adrianreber
Member

Then checkpointing is the wrong approach.

@dayinfinite
Author

Then checkpointing is the wrong approach.

What I mean is that I want to preserve the environment inside the container. After checkpointing, the exported archive is built into an image. This approach is a very fast way to build a runtime image.
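
For reference, the Kubernetes forensic-checkpointing documentation turns such a kubelet checkpoint archive into an OCI image roughly as sketched below. This is only a sketch: the archive name pattern and the io.kubernetes.cri-o.annotations.checkpoint.name annotation come from the CRI-O oriented docs, so a containerd setup may need different details, and the angle-bracket values are placeholders.

# Wrap the checkpoint archive produced by the kubelet into an image.
newcontainer=$(buildah from scratch)
buildah add $newcontainer /var/lib/kubelet/checkpoints/checkpoint-<pod>_<namespace>-<container>-<timestamp>.tar /
buildah config --annotation=io.kubernetes.cri-o.annotations.checkpoint.name=<container> $newcontainer
buildah commit $newcontainer checkpoint-image:latest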

@adrianreber
Member

Sorry, I do not understand what you want to do. First you said you want to checkpoint the container, then you said you just want to keep the environment inside the container.

Anyway, checkpointing containers with Nvidia GPUs does not work. You need to talk to Nvidia to enable it.

@dayinfinite
Author

Sorry, I do not understand what you want to do. First you said you want to checkpoint the container, then you said you just want to keep the environment inside the container.

Anyway, checkpointing containers with Nvidia GPUs does not work. You need to talk to Nvidia to enable it.

Thanks.

@alexfrolov

Hi!

I want to put my two cents in here. Nvidia recently uploaded a utility (binary only) to GitHub called cuda-checkpoint, which provides a method for checkpointing CUDA applications while they have no kernels running. This method relies on new capabilities of the NVIDIA driver (550). After the application's data stored on the GPU has been copied to host memory, the application can be safely dumped with CRIU. The restore process looks the same as in the common case, but after the application has been restored it needs to be toggled back with cuda-checkpoint.
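
For concreteness, the per-process flow described above looks roughly like the sketch below. It assumes a single CUDA process outside any container engine, an r550+ driver, and CRIU driven by hand; the cuda-checkpoint flags follow NVIDIA's public README, the CRIU flags follow the criu man page, and the PID value is a placeholder.

# PID of the CUDA process to checkpoint (placeholder)
PID=12345

# Suspend CUDA state: device memory is copied into host memory and the
# process releases its GPU resources.
cuda-checkpoint --toggle --pid $PID

# The process can now be dumped by CRIU like a CPU-only process.
criu dump --shell-job --tree $PID --images-dir ./ckpt

# Later: restore with CRIU (the original PID is restored), then toggle the
# CUDA state back onto the GPU.
criu restore --shell-job --images-dir ./ckpt --restore-detached
cuda-checkpoint --toggle --pid $PID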

However, to use this with Docker in the future, it seems some more work has to be done. Currently, the error when docker checkpoint create is invoked appears to be related to the /dev/nvidia* bindings.

(00.009747) mnt: Found /dev/null mapping for ./proc/timer_list mountpoint
(00.009750) mnt: Found /dev/null mapping for ./proc/keys mountpoint
(00.009752) mnt: Found /dev/null mapping for ./proc/kcore mountpoint
(00.009773) mnt: Found /etc/hosts mapping for ./etc/hosts mountpoint
(00.009776) mnt: Found /etc/hostname mapping for ./etc/hostname mountpoint
(00.009778) mnt: Found /etc/resolv.conf mapping for ./etc/resolv.conf mountpoint
(00.009781) mnt: Found /sys/fs/cgroup/blkio mapping for ./sys/fs/cgroup/blkio mountpoint
(00.009783) mnt: Found /sys/fs/cgroup/memory mapping for ./sys/fs/cgroup/memory mountpoint
(00.009786) mnt: Found /sys/fs/cgroup/devices mapping for ./sys/fs/cgroup/devices mountpoint
(00.009788) mnt: Found /sys/fs/cgroup/net_cls,net_prio mapping for ./sys/fs/cgroup/net_cls,net_prio mountpoint
(00.009790) mnt: Found /sys/fs/cgroup/cpu,cpuacct mapping for ./sys/fs/cgroup/cpu,cpuacct mountpoint
(00.009793) mnt: Found /sys/fs/cgroup/hugetlb mapping for ./sys/fs/cgroup/hugetlb mountpoint
(00.009795) mnt: Found /sys/fs/cgroup/perf_event mapping for ./sys/fs/cgroup/perf_event mountpoint
(00.009797) mnt: Found /sys/fs/cgroup/freezer mapping for ./sys/fs/cgroup/freezer mountpoint
(00.009799) mnt: Found /sys/fs/cgroup/cpuset mapping for ./sys/fs/cgroup/cpuset mountpoint
(00.009802) mnt: Found /sys/fs/cgroup/pids mapping for ./sys/fs/cgroup/pids mountpoint
(00.009804) mnt: Found /sys/fs/cgroup/systemd mapping for ./sys/fs/cgroup/systemd mountpoint
(00.009809) mnt: Inspecting sharing on 308 shared_id 0 master_id 0 (@./sys/firmware)
(00.009812) mnt: Inspecting sharing on 307 shared_id 0 master_id 0 (@./proc/scsi)
(00.009815) mnt: Inspecting sharing on 306 shared_id 0 master_id 0 (@./proc/timer_list)
(00.009817) mnt:        The mount 305 is bind for 306 (@./proc/keys -> @./proc/timer_list)
(00.009820) mnt:        The mount 304 is bind for 306 (@./proc/kcore -> @./proc/timer_list)
(00.009826) mnt:        The mount 340 is bind for 306 (@./dev -> @./proc/timer_list)
(00.009828) mnt: Inspecting sharing on 305 shared_id 0 master_id 0 (@./proc/keys)
(00.009831) mnt: Inspecting sharing on 304 shared_id 0 master_id 0 (@./proc/kcore)
(00.009833) mnt: Inspecting sharing on 303 shared_id 0 master_id 0 (@./proc/acpi)
(00.009835) mnt: Inspecting sharing on 302 shared_id 0 master_id 0 (@./proc/sysrq-trigger)
(00.009837) mnt:        The mount 301 is bind for 302 (@./proc/sys -> @./proc/sysrq-trigger)
(00.009840) mnt:        The mount 300 is bind for 302 (@./proc/irq -> @./proc/sysrq-trigger)
(00.009842) mnt:        The mount 299 is bind for 302 (@./proc/fs -> @./proc/sysrq-trigger)
(00.009844) mnt:        The mount 298 is bind for 302 (@./proc/bus -> @./proc/sysrq-trigger)
(00.009846) mnt:        The mount 339 is bind for 302 (@./proc -> @./proc/sysrq-trigger)
(00.009849) mnt: Inspecting sharing on 301 shared_id 0 master_id 0 (@./proc/sys)
(00.009851) mnt: Inspecting sharing on 300 shared_id 0 master_id 0 (@./proc/irq)
(00.009853) mnt: Inspecting sharing on 299 shared_id 0 master_id 0 (@./proc/fs)
(00.009855) mnt: Inspecting sharing on 298 shared_id 0 master_id 0 (@./proc/bus)
(00.009857) mnt: Inspecting sharing on 297 shared_id 0 master_id 0 (@./dev/console)
(00.009860) mnt:        The mount 341 is bind for 297 (@./dev/pts -> @./dev/console)
(00.009862) mnt: Inspecting sharing on 389 shared_id 0 master_id 14 (@./proc/driver/nvidia/gpus/0000:00:06.0)
(00.009864) Error (criu/mount.c:926): mnt: Mount 389 ./proc/driver/nvidia/gpus/0000:00:06.0 (master_id: 14 shared_id: 0) has unreachable sharing. Try --enable-external-masters.
(00.009885) Unlock network
(00.009890) Running network-unlock scripts
(00.009892)     RPC
(00.100301) Unfreezing tasks into 1
(00.100330)     Unseizing 2961004 into 1
(00.100359) Error (criu/cr-dump.c:1781): Dumping FAILED.

@alexfrolov

Some more thoughts on this:

Another potentially interesting case is a Docker container running an application that generates periodic GPU load by forking a new process each time, while some other process(es) hold temporary data in host memory:

process A (stores temporary data in host memory, makes no CUDA calls)
processes B_1, ..., B_n (one-shot processes that work with the GPU, send data to process A, and then terminate)

In this case, snapshotting the Docker container would be useful to preserve the state of process A, but it is currently not possible.

@adrianreber
Member

@alexfrolov thanks for your thoughts. Today we can already checkpoint and restore AMD GPU containers with Podman. So we know it is doable, but from my point of view Nvidia needs to do the work to make it fully functional, just like AMD came along and implemented it. We are also following closely what Nvidia does with their checkpoint tool. It is extremely limited at this point, but it looks promising for the future.

The actual error about the mount point looks fixable by correctly specifying all mount points in the config.json from runc.
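
Separately from fixing the mount points in config.json, the option that CRIU's own log suggests (--enable-external-masters) could be tried via a CRIU configuration file. This is only a guess at a workaround: it assumes the runtime's CRIU invocation honors /etc/criu/runc.conf (the location runc documents; /etc/criu/default.conf is read by CRIU itself), and it may not get the dump past the GPU-related failure.

# CRIU configuration files take long option names without the leading "--",
# one per line. This creates or overwrites the file.
echo "enable-external-masters" | sudo tee /etc/criu/runc.conf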

@avagin
Member

avagin commented May 24, 2024

@adrianreber Nvidia chose another way to implement C/R, and there's nothing wrong with that. I looked at the https://github.com/NVIDIA/cuda-checkpoint tool, and I think we need to implement support for it in CRIU. The only thing we need to do is run this tool for all processes that use CUDA and NVML (NVML isn't supported yet, but they are working on that). It has to be done before the dump and after the restore.

Even without the support of this tool in CRIU, users can checkpoint/restore CUDA workloads but they will need to run this tool for CUDA processes.
