Support the user flag from docker exec in kubectl exec #30656

Closed
VikParuchuri opened this issue Aug 15, 2016 · 105 comments
Labels: area/kubectl, sig/cli, sig/node

Comments

@VikParuchuri

It looks like docker exec is being used as the backend for kubectl exec. docker exec has the --user flag, which allows you to run a command as a particular user. This same functionality doesn't exist in Kubernetes.

Our use case is that we spin up pods and execute untrusted code in them. However, there are times when, after creating the pod, we need to run programs that need root access (for example, to access privileged ports).

We don't want to run the untrusted code as root in the container, which prevents us from just escalating permissions for all programs.

I looked around for references to this problem, but only found this StackOverflow answer from last year -- http://stackoverflow.com/questions/33293265/execute-command-into-kubernetes-pod-as-other-user .

There are some workarounds to this, such as setting up a server in the container that takes commands in, or defaulting to root, but dropping to another user before running untrusted code. However, these workarounds break nice Kubernetes/Docker abstractions and introduce security holes.
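
For reference, a quick sketch of the Docker-level behaviour being requested versus what kubectl offers today (the container and pod names below are just placeholders):

# docker exec can pick the user (UID, UID:GID, or name) for the new process
docker exec -u 0 -it my-container sh
docker exec -u 1000:1000 my-container id

# kubectl exec has no equivalent flag; it always runs as the container's
# configured user (from the image or the pod's securityContext)
kubectl exec -it my-pod -c my-container -- sh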

@adohe-zz

SGTM. @kubernetes/kubectl any thoughts on this?

@smarterclayton
Contributor

It's not unreasonable, but we'd need pod security policy to control the user input, and we'd probably have to disallow specifying the user by name (since we don't allow that for containers - you must specify a UID).

@smarterclayton
Contributor

@sttts and @ncdc re exec

@sttts
Contributor

sttts commented Aug 17, 2016

Legitimate use-case

@killdash9

Any update on this?

@miracle2k

My app container image is built using buildpacks. I'd like to open a shell. When I do, I am root, and all the env vars are set. But the buildpack-generated environment is not there. If I open a login shell for the app user (su -l u22055) I have my app environment, but now the kubernetes env vars are missing.

@smarterclayton
Contributor

I thought su -l didn't copy env vars? You have to explicitly do the copy
yourself or use a different command.


@adarshaj

@miracle2k - Have you tried su -m -l u22055? -m is supposed to preserve environment variables.

@miracle2k

@adarshaj @smarterclayton Thanks for the tips. su -m has its own issues (the home dir is wrong), but I did make it work in the meantime. The point, though - and that's why I posted it here - is that I'd like to see "kubectl exec" do the right thing. Maybe even use the user that the Dockerfile defines.
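
For anyone hitting the same buildpack problem, roughly the trade-off being described (u22055 is the app user from the comments above; the pod name is a placeholder):

# Login shell: picks up the buildpack profile, but scrubs the Kubernetes env vars
kubectl exec -it my-pod -- su -l u22055

# -m preserves the current environment, but HOME then still points at root's home
kubectl exec -it my-pod -- su -m -l u22055

# Rough workaround: dump the env before switching user, then source it afterwards
kubectl exec -it my-pod -- sh -c 'export -p > /tmp/k8s-env.sh && exec su -l u22055'
. /tmp/k8s-env.sh   # run this inside the resulting login shell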

@jsindy

jsindy commented Nov 30, 2016

Here is an example of how I need this functionality.

The official Jenkins image runs as the user Jenkins. I have a persistent disk attached that I need to resize. If kubectl exec had the --user flag, I could bash in as root and run resize2fs. Unfortunately, without it, this is an extreme pain.
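
Until then, the usual node-level workaround (mentioned again further down in this thread) looks something like this; the node name, container ID, and device path are placeholders:

# SSH to the node running the pod and exec into the container as root
ssh node-1
docker ps | grep jenkins
docker exec -u root -it <container-id> bash

# ...then grow the filesystem on the attached disk
resize2fs /dev/sdb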

@chrishiestand
Contributor

An additional use case - you're being security conscious, so none of the processes running inside the container are privileged. But now something unexpectedly isn't working, and you want to go in as root to e.g. install debug utilities and figure out what's wrong on the live system.

@SimenB
Contributor

SimenB commented Jan 12, 2017

Installing stuff for debugging purposes is my use case as well. Currently I SSH into the nodes running Kubernetes and use docker exec directly.

@gaballard

What's the status on this? This functionality would be highly useful

@fabianofranz
Contributor

I didn't check, but do the --as and --as-group global flags help here? Do they even work with exec? cc @liggitt

@liggitt
Member

liggitt commented May 26, 2017

I didn't check, but do the --as and --as-group global flags help here? Do they even work with exec? cc @liggitt

No, those have to do with identifying yourself to the Kubernetes API, not with passing through a chosen UID for the exec call.

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@whereisaaron

whereisaaron commented Jun 6, 2017

The lack of the user flag is a hassle. My use case is a container that runs as an unprivileged user; I mount a volume on it, but the volume folder is not owned by that user. There is no option to mount the volume with specified permissions. I can't use an entrypoint script to change the permissions because that runs as the unprivileged user. I can't use a lifecycle.postStart hook because that runs as the unprivileged user too. kubectl exec -u root could do that, if the '-u' option existed.

I guess, though, this should be an additional RBAC permission, to allow/block 'exec' as a user other than the container user.

Ideally the lifecycle hooks should be able to run as root in the container, even when the container itself does not. Right now the best alternative is probably to run an init container against the same mount; kind of an overhead to start a separate container and mount volumes, when really I just need a one-line command as root at container start.
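
For completeness, a minimal sketch of that init-container alternative, assuming a hypothetical PVC named app-data and an app image that runs as UID 1000; the init container runs as root just long enough to fix ownership:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-chown-init
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
  initContainers:
    - name: fix-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      securityContext:
        runAsUser: 0            # init container runs as root...
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: my-app:latest
      securityContext:
        runAsUser: 1000         # ...while the app container stays unprivileged
      volumeMounts:
        - name: data
          mountPath: /data
EOF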

@xiangpengzhao
Contributor

/sig cli

@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Jun 23, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 23, 2017
@skorski

skorski commented Jun 27, 2017

+1 for this feature. Not having this makes debugging things a lot more painful.

@johnjjung

+1 for this feature. I have to rebuild my Docker container and make sure the Dockerfile has USER root as the last line, then debug, then disable this.

The docker command line seems to have a --user flag.

@BenAbineriBubble

@johnjjung, if you have SSH access to the node, you can connect to the container using docker with the user flag, which might save you a bit of time.

@johnjjung

johnjjung commented Jul 10, 2017 via email

@jiaj12

jiaj12 commented Aug 31, 2017

+1, really an issue. I have to SSH and then run docker exec - so annoying.

@sttts
Contributor

sttts commented Aug 31, 2017

/cc @frobware

@mrbobbytables
Member

Thanks for the thoughtful reply @whereisaaron :) I think that captures things quite well.

I figured I'd see how much work it is to write one and... yeah, I'm not the person to write this. The template lost me at checklist item one, "Pick a hosting SIG." Anyone more familiar with the process want to start the draft? I just want a place to stick my 👍 in support of the proposal as an active Kubernetes user.

KEPs can be quite daunting, but I want to provide a little context around them. Kubernetes itself is very large; potential changes have a very large blast radius, both for the contributor base and for users. A new feature might seem easy to implement but has the potential to broadly impact both groups.

We delegate stewardship of parts of the code base to SIGs, and it is through the KEPs that one or more of the SIGs can come to consensus on a feature. Depending on what the feature does, it may go through an API review, be evaluated for scalability concerns, etc.

All this is to ensure that what is produced has the greatest chance of success and is developed in a way that the SIG(s) would be willing to support. If the original author(s) step away, the responsibility of maintaining it falls to the SIG. If, say, a feature was promoted to stable and then flagged for deprecation, it'd be a minimum of a year before it could be removed, following the deprecation policy.

If there's enough demand for a feature, usually someone that's more familiar with the KEP process will offer to help get it going and shepherd it along, but it still needs someone to drive it.

In any case, I hope that sheds at least a bit of light on why there is a process associated with getting a feature merged. 👍 If you have any questions, please feel free to reach out directly.

@AndrewSav

The disadvantage is I don't think you can inspect the filesystem of the target, unless you can share an external mount or 'empty' mount.

For me, inspecting the filesystem as root, and running utilities that can interact with the filesystem as root, is the number one reason for wanting the requested feature. In short, this suggestion does not solve my problem at all.

@whereisaaron

whereisaaron commented May 10, 2020

The disadvantage is I don't think you can inspect the filesystem of the target

I was wrong about that: because your injected debug container shares the process namespace with your target container, you can access the filesystem of any process in the target container from your debug container. And that includes both the container filesystems and any filesystems mounted into those containers.

Container filesystems are visible to other containers in the pod through the /proc/$pid/root link.

https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/#understanding-process-namespace-sharing
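
Concretely, something along these lines (using the kubectl alpha debug invocation quoted later in this thread; the PID shown is only an example - check ps for the real one):

# Inject a debug container that shares the target pod's process namespace
kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

# From inside the debug container, locate the target process and walk its root fs
ps ax
ls /proc/8/root/                     # 8 is an example PID taken from ps output
cat /proc/8/root/etc/os-release

# chroot gives you something closer to "being inside" the target filesystem
chroot /proc/8/root sh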

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2020
@zerkms
Contributor

zerkms commented Aug 8, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2020
@AndrewSav

kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

error: ephemeral containers are disabled for this cluster

@whereisaaron It looks like most cloud providers do not support this, and for on-prem we can just go to a node and docker exec into the container. So again, the usefulness seems quite limited.

Also, access via /proc/$pid/root is not what I'd like; I would like direct access, not via a "side window". For example, running utilities like apt/apk in the container is not easy when the root filesystem is not where they expect it.

@bmaehr

bmaehr commented Nov 8, 2020

I had a similar problem: I needed to create some directories and links, and add permissions for the non-root user, on an official image deployed by an official Helm chart (Jenkins).

I was able to solve it by using the exec-as plugin.

@rcny

rcny commented Dec 3, 2020

With the planned Docker deprecation and subsequent removal, when will this be addressed? Ephemeral containers are still in alpha. What is the stable alternative without using Docker as the CRI?

@gjcarneiro

Besides being alpha, ephemeral containers are a lot more complicated to use than a simple kubectl exec --user would be.

@xplodwild

Another use case for this is manually executing scripts in containers. For example, NextCloud's occ maintenance script must be run as www-data. There is no sudo or similar in the image, and the docs advise using docker exec -u 33 when in a Docker environment.
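
In other words (container/pod names below are placeholders), the Docker form the docs recommend has no kubectl equivalent:

# Plain Docker: run occ as www-data (UID 33), per the NextCloud docs
docker exec -u 33 -it nextcloud php occ maintenance:mode --on

# Kubernetes: no --user flag, so this runs as whatever user the container starts as
kubectl exec -it nextcloud-pod -- php occ maintenance:mode --on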

@morremeyer

morremeyer commented Dec 4, 2020 via email

@karimhm

karimhm commented Jan 25, 2021

4 years have passed and this feature is still not implemented. WOW!

@dims
Member

dims commented Feb 1, 2021

/close

please see the last comment from Clayton here: #30656 (comment)

@dims dims closed this as completed Feb 1, 2021
@immanuelfodor

When a KEP is opened, please link it back here so we can follow it :)

@AndrewSav

@dims I'm confused, why is this closed? It is not fixed, and it is also stated at #30656 (comment) that this is not a case of "won't fix", so why has it been closed?

@dims
Member

dims commented Feb 2, 2021

@AndrewSav there is no one working on it and no one willing to work on it, so I'm closing this to reflect reality; by default it is "won't fix". This has gone on for 4 years and I don't want to continue giving the impression that this is on anyone's radar, since it clearly isn't.

@bronger

bronger commented Feb 2, 2021

Anyone willing to push this forward would have to address the "security implications" Clayton mentions. This might make contributors reluctant, so what is meant by that?

I would have thought that if I am allowed to kubectl exec to a pod, I am the full-fledged master of that pod anyway.

@ssup2

ssup2 commented Feb 18, 2021

Hi. To solve this issue, I'm making a tool called "kpexec".
Please try it and give me feedback. Thanks.

https://github.com/ssup2/kpexec

@kam1kaze

btw, there is a kubectl plugin for that too.

https://github.com/jordanwilson230/kubectl-plugins#kubectl-ssh

@relgames

Looks like this is still not resolved, after 6 years. And GKE moved away from Docker, making it impossible to SSH to nodes and use docker exec -u, as crictl does not have a way to pass a user either.

kubectl debug does not work either, as it just ends up with the same user as the main container, with no way to become root.

So what is the suggestion? Move away from GKE into AWS who still use Docker? Is it the only way?
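
For what it's worth, if you can still get a shell on the node, one Docker-free workaround is nsenter; this is only a sketch (the exact crictl inspect output layout depends on the runtime version, and you need root on the node):

# On the node: find the container and its host PID via the CRI runtime
crictl ps | grep my-app
crictl inspect <container-id> | grep -i '"pid"'

# Enter the container's namespaces as root
nsenter -t <pid> -m -u -i -n -p -- sh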
