Support the user flag from docker exec in kubectl exec #30656
SGTM. @kubernetes/kubectl any thoughts on this? |
It's not unreasonable, but we'd need pod security policy to control the user input and we'd probably have to disallow user by name (since we don't allow it for containers - you must specify UID). |
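For context on the "UID, not user name" constraint: container-level security settings already work this way today, so a hypothetical `--user` flag for `kubectl exec` would presumably be restricted the same way. A minimal pod spec illustrating the existing `runAsUser` field (names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1000   # must be a numeric UID; user names are not accepted
```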
Legitimate use-case |
Any update on this? |
My app container image is built using buildpacks. I'd like to open a shell. When I do, I am root, and all the env vars are set. But the buildpack-generated environment is not there. If I open a login shell for the app user ( |
I thought |
@miracle2k - Have you tried |
@adarshaj @smarterclayton Thanks for the tips. |
Here is an example of how I need this functionality. The official Jenkins image runs as the user jenkins. I have a persistent disk attached that I need to resize. If kubectl exec had the --user flag I could bash in as root and run resize2fs. Unfortunately, without it, it is an extreme pain. |
An additional use case - you're being security conscious so all processes running inside the container are not privileged. But now something unexpectedly isn't working and you want to go in as root to e.g. install debug utilities and figure out what's wrong on the live system. |
Installing stuff for debugging purposes is my use case as well. Currently I |
What's the status on this? This functionality would be highly useful |
I didn't check, but does the |
No, those have to do with identifying yourself to the kubernetes API, not passing through to inform the chosen uid for the exec call |
The lack of the user flag is a hassle. Use case is I have a container that runs as an unprivileged user, I mount a volume on it, but the volume folder is not owned by the user. There is no option to mount the volume with specified permissions. I can't use an entrypoint script to change the permissions because that runs as the unprivileged user. I can't use a lifecycle.preStart hook because that runs as the unprivileged user too. I guess though this should be an additional RBAC permission, to allow/block 'exec' as other than the container user. Ideally the lifeCycle hooks should be able to run as root in the container, even when the container does not. Right now the best alternative is probably to run an init container against the same mount; kind of an overhead to start a separate container and mount volumes, when really I just need a one-line command as root at container start. |
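The init-container workaround mentioned above can be sketched like this (image names, UIDs, and paths are placeholders): the init container runs as root against the same volume and fixes ownership before the unprivileged main container starts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
  initContainers:
  - name: fix-perms
    image: busybox
    command: ["chown", "-R", "1000:1000", "/data"]
    securityContext:
      runAsUser: 0          # root, only for this one-shot command
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: myapp            # placeholder
    securityContext:
      runAsUser: 1000       # unprivileged at runtime
    volumeMounts:
    - name: data
      mountPath: /data
```

As the comment notes, this is real overhead for what is effectively a one-line command at container start, but it works with the API as it exists today.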
/sig cli |
+1 for this feature. Not having this makes debugging things a lot more painful. |
+1 for this feature. I have to rebuild my docker container and make sure the Dockerfile has USER root as the last line, then debug, then disable this. The docker command line seems to have a --user flag. |
johnjjung, if you have ssh access to the node you can connect to the container using docker with the user flag which might save you a bit of time. |
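To make that concrete, here is a hedged sketch of the node-level workaround (pod and namespace names are placeholders, and it assumes a Docker-based node you can SSH into):

```shell
# Look up the node and container ID for the pod, then exec as root
# with the Docker CLI on that node.
POD=jenkins-0
NS=default
NODE=$(kubectl get pod "$POD" -n "$NS" -o jsonpath='{.spec.nodeName}')
CID=$(kubectl get pod "$POD" -n "$NS" \
  -o jsonpath='{.status.containerStatuses[0].containerID}')
CID=${CID#docker://}   # strip the runtime prefix from the container ID
ssh "$NODE" "docker exec -it -u root $CID bash"
```

Note this bypasses the Kubernetes API entirely (and its audit/RBAC controls), which is part of why people in this thread want a first-class flag instead.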
Hmm, awesome let me try this
|
+1, this is really an issue; I have to SSH to the node and then run docker exec, which is annoying. |
/cc @frobware |
Thanks for the thoughtful reply @whereisaaron :) I think that captures things quite well.
KEPs can be quite daunting, but I want to provide a little context around them. Kubernetes itself is very large; potential changes have a very large blast radius, both for the contributor base and users. A new feature might seem easy to implement but has the potential to broadly impact both groups. We delegate stewardship of parts of the code base to SIGs; and it is through the KEPs that one or more of the SIGs can come to consensus on a feature. Depending on what the feature does, it may go through an API review, be evaluated for scalability concerns, etc. All this is to ensure that what is produced has the greatest chance of success and is developed in a way that the SIG(s) would be willing to support. If the original author(s) step away, the responsibility of maintaining it falls to the SIG. If, say, a feature was promoted to stable and then flagged for deprecation, it'd be a minimum of a year before it could be removed following the deprecation policy. If there's enough demand for a feature, usually someone that's more familiar with the KEP process will offer to help get it going and shepherd it along, but it still needs someone to drive it. In any case, I hope that sheds at least a bit of light on why there is a process associated with getting a feature merged. 👍 If you have any questions, please feel free to reach out directly. |
For me, inspecting the filesystem as root, and running utilities that can interact with the filesystem as root, is the number one reason for wanting the requested feature. In short, this suggestion does not solve my problem at all. |
I was wrong about that, because your injected debug container shares the process namespace with your target container, you can access the filesystem of any process in the target container from your debug container. And that would include both the container filesystems and any filesystems mounted into those containers.
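With a shared process namespace this looks roughly as follows (pod and container names are placeholders, and it assumes `kubectl debug` and ephemeral containers are available in your cluster):

```shell
# Attach a debug container that shares the target's process namespace.
kubectl debug -it mypod --image=busybox --target=app -- sh

# Inside the debug shell: processes in the target container are visible,
# and each one's root filesystem is reachable via /proc/<pid>/root.
ps ax
ls /proc/1/root/
```

This reaches the target container's filesystem and mounts, though the debug shell still runs as the debug image's user rather than an arbitrary UID of your choosing.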
|
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
@whereisaaron It looks like most cloud providers do not support this, and for on prem we can just go to a node and Also access via |
I had a similar problem: I needed to create some directories, links and add permission for the non-root user on an official image deployed by an official helm chart (jenkins). I was able to solve it by using the exec-as plugin. |
With planned Docker deprecation and subsequent removal, when will be this addressed? Ephemeral containers are still in alpha. What is the stable alternative without using Docker as CRI? |
Besides being alpha, ephemeral containers are a lot more complicated to use than simply |
Another usecase for this is manually executing scripts in containers. For example, NextCloud's |
You can solve the problem with nextcloud by running
```
su -s /bin/bash www-data
```
But this is not ideal.
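A fuller version of that workaround, assuming the official Nextcloud image layout where the container's default user is root (pod name and occ path are placeholders):

```shell
# kubectl exec runs as the image's default user (root here), so drop to
# www-data just for the occ invocation.
kubectl exec -it nextcloud-0 -- \
  su -s /bin/bash www-data -c "php /var/www/html/occ status"
```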
|
4 years have passed and this feature is still not implemented. WOW! |
/close please see the last comment from Clayton here: #30656 (comment) |
When there is a KEP opened, please link it back here to let us follow it :) |
@dims I'm confused, why is this closed? It is not fixed, and it also stated at #30656 (comment) that this is not a case of "won't fix", so why has it been closed? |
@AndrewSav there is no one working on it and no one willing to work on it, so I am closing this to reflect reality; by default it is "won't fix". This has gone on for 4 years and I don't want to continue giving the impression that this is on anyone's radar, since it clearly is not. |
Anyone willing to push this forward would have to address the “security implications” Clayton mentions. This might make contributors reluctant, so what is meant by that? I would have thought that if I am allowed to |
HI. To solve this issue, I'm making a tool called "kpexec". |
btw, there is a kubectl plugin for that too. https://github.com/jordanwilson230/kubectl-plugins#kubectl-ssh |
Looks like this is still not resolved, after 6 years. And GKE moved away from docker, making it impossible to SSH to nodes and use
So what is the suggestion? Move away from GKE into AWS who still use Docker? Is it the only way? |
It looks like `docker exec` is being used as the backend for `kubectl exec`. `docker exec` has the `--user` flag, which allows you to run a command as a particular user. This same functionality doesn't exist in Kubernetes.

Our use case is that we spin up pods, and execute untrusted code in them. However, there are times when, after creating the pod, we need to run programs that need root access (they need to access privileged ports, etc).
We don't want to run the untrusted code as root in the container, which prevents us from just escalating permissions for all programs.
I looked around for references to this problem, but only found this StackOverflow answer from last year -- http://stackoverflow.com/questions/33293265/execute-command-into-kubernetes-pod-as-other-user .
There are some workarounds to this, such as setting up a server in the container that takes commands in, or defaulting to root, but dropping to another user before running untrusted code. However, these workarounds break nice Kubernetes/Docker abstractions and introduce security holes.