securityContext capabilities.drop #1249

Open
sjthespian opened this issue Apr 25, 2024 · 3 comments

@sjthespian commented Apr 25, 2024

I am trying to install Kuberhealthy into a "hardened" Kubernetes 1.28.8 cluster, and the pods are failing to start because container "kuberhealthy" must set securityContext.capabilities.drop=["ALL"]. While I can set the rest of the required securityContext settings via the Helm chart values file, the capabilities key is missing from the securityContext set in deployment.yaml:

```yaml
securityContext:
  runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
  runAsUser: {{ .Values.securityContext.runAsUser }}
  allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
  readOnlyRootFilesystem: {{ .Values.securityContext.readOnlyRootFilesystem }}
  {{- if .Values.securityContext.seccompProfile }}
  seccompProfile:
    {{- toYaml .Values.securityContext.seccompProfile | nindent 12 }}
  {{- end }}
```

Have there been any thoughts about either adding the capabilities key to the securityContext, or supporting an arbitrary map of values to be appended?
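For illustration, a minimal sketch of the first option, mirroring the existing seccompProfile conditional (the `capabilities` value name here is a suggestion, not something the chart currently exposes):

```yaml
securityContext:
  runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
  runAsUser: {{ .Values.securityContext.runAsUser }}
  allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
  readOnlyRootFilesystem: {{ .Values.securityContext.readOnlyRootFilesystem }}
  {{- if .Values.securityContext.capabilities }}
  capabilities:
    {{- toYaml .Values.securityContext.capabilities | nindent 12 }}
  {{- end }}
  {{- if .Values.securityContext.seccompProfile }}
  seccompProfile:
    {{- toYaml .Values.securityContext.seccompProfile | nindent 12 }}
  {{- end }}
```

The arbitrary-map option would collapse the whole block to a single `{{- toYaml .Values.securityContext | nindent 12 }}`, at the cost of losing the per-key defaults.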

Thanks!

@PWM-BIA-TPA-PIE

I too am having this issue with Kuberhealthy on a CIS-hardened Kubernetes cluster. We are unable to set the capabilities because that option is unavailable in the deployment template, as you have pointed out, @sjthespian.
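For reference, this is roughly the securityContext such a restricted/CIS-style policy expects to be settable from values.yaml (a sketch; the runAsUser value is illustrative, and `capabilities` is the piece the chart cannot currently pass through):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: ["ALL"]
```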

@sjthespian
Author

Here's a PR that I believe should address this: #1250

@peschmae
Contributor

Being able to add the securityContext to the initial pod that is spawned only solves half the issue, sadly :(

As mentioned in #1243, the pods generated by the checks (through the daemonset check or the deployment check) don't have a full security context set either and will fail on a restricted cluster as well.
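The checker pod itself can at least be hardened through the khcheck's podSpec; here is a sketch assuming the usual KuberhealthyCheck CRD layout (image tag and intervals illustrative). But as noted above, this still doesn't cover the deployments and daemonsets those checker pods create themselves:

```yaml
apiVersion: comcast.github.io/v1
kind: KuberhealthyCheck
metadata:
  name: deployment
  namespace: kuberhealthy
spec:
  runInterval: 10m
  timeout: 15m
  podSpec:
    # pod-level settings
    securityContext:
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    containers:
      - name: deployment
        image: kuberhealthy/deployment-check:v1.9.0
        # container-level settings, including the capabilities drop
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
```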
