# Known Issues

## Issues due to the sidecar container design

### The sidecar container mode design

This section describes how the GCS FUSE sidecar container is injected and how a GCS bucket-backed volume is mounted, so you can understand the restrictions imposed by the sidecar container mode design.

A webhook controller monitors all Pod creation requests. If it detects the Pod annotation `gke-gcsfuse/volumes: "true"`, the webhook modifies the Pod spec to inject the sidecar container at position 0 of the regular container array. The Cloud Storage FUSE processes run in this sidecar container.
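For reference, a minimal Pod spec that triggers the webhook injection might look like the following sketch. The Pod name, workload image, volume name, and bucket name are illustrative placeholders, not values from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-example          # hypothetical Pod name
  annotations:
    gke-gcsfuse/volumes: "true"  # detected by the webhook; triggers sidecar injection
spec:
  containers:
  - name: workload               # placeholder workload container
    image: busybox
    volumeMounts:
    - name: gcs-volume
      mountPath: /data
  volumes:
  - name: gcs-volume
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-bucket    # placeholder bucket name
```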

After the Pod is scheduled onto a node, the GCS FUSE CSI Driver node server, which runs as a privileged container on each node, opens the /dev/fuse device on the node and obtains a file descriptor. The CSI driver then calls mount.fuse3(8), passing the file descriptor via the mount option `fd=N` to create a mount point. Finally, the CSI driver calls sendmsg(2) to send the file descriptor to the sidecar container over a Unix Domain Socket (UDS) using SCM_RIGHTS.

After the CSI driver creates the mount point, it informs kubelet to proceed with the Pod startup. The containers in the Pod spec are started in order, so the sidecar container starts first.

In the sidecar container, which is unprivileged, a process connects to the UDS and calls recvmsg(2) to receive the file descriptor. The process then invokes Cloud Storage FUSE, passing it the file descriptor to serve the FUSE mount point. Instead of passing the actual mount point path, the file descriptor is passed to Cloud Storage FUSE via the magic `/dev/fd/N` syntax that it supports. Until Cloud Storage FUSE takes over the file descriptor, any operation against the mount point hangs.

Because the CSI driver sets `requiresRepublish: true`, it periodically checks whether the GCSFuse volume is still needed by the containers. When the CSI driver detects that all the main workload containers have terminated, it creates an exit file in a Pod emptyDir volume to notify the sidecar container to terminate.

### Implications of the sidecar container design

Until Cloud Storage FUSE takes over the file descriptor, the mount point is not accessible. Any operation against the mount point hangs, including the stat(2) call used to check whether the mount point exists.

The sidecar container, or more precisely the Cloud Storage FUSE process serving the mount point, needs to remain running for the full duration of the Pod's lifecycle. If the Cloud Storage FUSE process is killed, the workload application receives the I/O error `Transport endpoint is not connected`.

The sidecar container auto-termination depends on the Kubernetes API correctly reporting the Pod status. However, due to a Kubernetes issue, the container status is not updated after a termination caused by Pod deletion. As a result, the sidecar container may not terminate automatically in some scenarios.

### Issues

### Solutions

The GCS FUSE CSI Driver now utilizes the Kubernetes native sidecar container feature, available in GKE versions 1.29.3-gke.1093000 or later.

The Kubernetes native sidecar container feature introduces sidecar containers: a new type of init container that starts before the other containers, remains running for the full duration of the Pod's lifecycle, and does not block Pod termination.

Instead of being injected as a regular container, the sidecar container is now injected as an init container, so that other non-sidecar init containers can also use the CSI driver. Moreover, the sidecar container's lifecycle, including auto-termination, is managed by Kubernetes.
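In the native sidecar model, what makes an init container a sidecar is `restartPolicy: Always` on the init container entry. A simplified sketch of the resulting Pod shape follows; the container names and images are illustrative placeholders, not the driver's actual injected spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-native-sidecar-example  # hypothetical Pod name
spec:
  initContainers:
  - name: gke-gcsfuse-sidecar           # injected by the webhook
    image: gcsfuse-sidecar-image        # placeholder image
    restartPolicy: Always               # marks this init container as a native sidecar
  containers:
  - name: workload                      # placeholder workload container
    image: busybox
```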

## Issues in Autopilot clusters