Force pods to re-pull an image without changing the image tag #33664
Comments
This is indeed important, and I think there are two cases:
@yissachar using
@yujuhong Sometimes it's very useful to be able to do this. For instance, we run a testing cluster that should run a build from the latest commit on the master branch of our repository. There aren't tags or branches for every commit, so ':latest' is the logical and most practical name for it. Wouldn't it make more sense if Kubernetes stored and checked the hash of the deployed container instead of its (mutable) name anyway, though?
@yujuhong I agree that if you can do so then you should (and I do!). But this question comes up quite frequently and often users cannot easily tag every build (this often arises with CI systems). They need a solution with less friction to their process, and this means they want to see some way of updating a Deployment without changing the image tag.
I am running into the same limitations. I agree that in an ideal setup every version would be explicitly tagged, but this can be cumbersome in highly automated environments. Think of dozens of containers with 100 new versions per day. Also, when debugging and setting up a new infrastructure there are a lot of small tweaks made to the containers. Having a force-repull on a Deployment will make the process more frictionless.
Hmm....I still think automatically tagging images by commit hash would be ideal, but I see that it may be difficult to do for some CI systems. In order to do this, we'd need (1) a component to detect the change and (2) a mechanism to restart the pod.
This sounds reasonable. /cc @pwittrock, who has more context on the CI systems.
Creating a tag for every single commit is also pretty pointless - commits already have unique identifiers - especially when you only care about the last one. What I don't understand is why Kubernetes treats tags as if they're immutable, when they're explicitly mutable human-readable names for immutable identifiers (the hash of the manifest).
What is the consensus on this?
Longer term, some CI/CD system should support this. In the immediate term: it would probably be simple to create a controller that listens for changes to a container registry and then updates a label on all deployments with a specific annotation. You could install this controller into your Kubernetes cluster using Helm. I will try to hack a prototype together later this week.
Quick question: why not set an annotation on the pod template with the current time to force the re-pull? I believe this would execute an update using the deployment's strategy to roll out the new image. I put together an example of how to write a controller to do this in response to webhook callbacks from Docker Hub. I need to add some documentation and then will post the example here. Ideally I would put together a Helm chart for this as well.
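For illustration, a minimal sketch of that annotation idea (the deployment name and annotation key below are placeholders, not something from this thread):

```sh
# Bumping a pod-template annotation changes .spec.template, so the Deployment
# controller performs a rolling update using its configured strategy.
kubectl patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-timestamp\":\"$(date +%s)\"}}}}}"
```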
FYI, here is a simple controller for pushing deployment updates in response to webhook callbacks. It natively supports Docker Hub, but you can manually post to it from the command line.
FWIW, I don't think we should support this as a proper Kubernetes API. I don't quite understand the use case; as I see it, there are two scenarios:
Finally, @justinclayton or @yissachar, is there a use case I'm missing here?
I'm not sure I follow the argument here. Downtime isn't fine in our use case of running monitoring nodes against the latest instances of our software. It seems sensible to be able to apply the same deployment mechanics to this as to anything else. More broadly, Docker tags, like most name services, are fundamentally mutable by design, and Docker repos provide a way to resolve a tag to the current image hash. I don't understand why Kubernetes associates the mutable tag with a deployed pod and then treats it as immutable, instead of just using the immutable identifier in the first place.
Perhaps, but it still leaves the task of resolving tag name to image hash up to the user, something that's definitely nontrivial to do with existing tools.
@brendandburns I'm interested in this as well, but not for the reasons of updating the pods. My situation is this: pods and containers are pretty stable, but the data moves way faster. Our data sets span 100s of GBs per file with 100s of files (genomic data, life sciences). And since a lot of the software is academic, there isn't much engineering effort going into it. Currently the easiest way to "redeploy" is to replace a config map that points to the new data sets. Kubernetes takes care of replacing the actual config file in the container, but right now there's no way to trigger a kind of rolling update so that pods get killed and restarted the same way it would happen with an update to the actual container versions. I don't want to get into the business of image management too much, so I try not to update images every time data changes. Does that make sense? I'm happy to go any other path, but my current experience is that this seems to be the way to go when there's not enough development bandwidth to fix the underlying issues.
#13488 seems related.
@serverhorror I think the way that I would accomplish what you want is to set up a sidecar container in the same pod as your main container. The job of that sidecar is to monitor the config file and send a signal (e.g. SIGHUP or SIGKILL) to your main container indicating that the data file has changed. You could also use container health checks, e.g. set up a health check for your 'main' container that points to a web service hosted by your sidecar. Whenever the sidecar detects a change, the health check goes 'unhealthy' and the kubelet will automatically restart your main container.

@Arachnid I guess I fundamentally believe that tags should not be used in a mutable manner. If you use image tags in a mutable way, then the definition stops having meaning; you can no longer know for sure what is running in a particular container just by looking at the API object. Docker may allow you to mutate tags on images, but I think that the Kubernetes philosophy (and hard-won experience of running containerized systems at scale) is that mutable tags (and 'latest' in particular) are very dangerous to use in a production environment. I agree that the right thing to do is to apply the same deployment mechanics in test and in prod. Here are some examples of concrete production issues that I ran into due to the use of latest:
I hope that helps explain why I think that mutable tags are a dangerous idea.
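To make the sidecar/health-check suggestion above concrete, here is a rough sketch; every name, image, and port is an assumption for illustration, not something from this thread. The sidecar watches the mounted config and serves a health endpoint that the main container's liveness probe points at; when the data changes, the probe fails and the kubelet restarts the main container.

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: main-with-config-watcher
spec:
  volumes:
  - name: config
    configMap:
      name: app-config                 # hypothetical ConfigMap holding the data pointers
  containers:
  - name: main
    image: registry.example.com/main-app:stable
    volumeMounts:
    - name: config
      mountPath: /etc/app
    livenessProbe:
      httpGet:
        path: /healthz
        port: 9999                     # served by the sidecar; containers share the pod network
      periodSeconds: 10
  - name: config-watcher               # sidecar that reports unhealthy when the config changes
    image: registry.example.com/config-watcher:stable
    volumeMounts:
    - name: config
      mountPath: /etc/app
    ports:
    - containerPort: 9999
EOF
```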
Agreed, as-is they're dangerous, but this could be trivially resolved by having the API object retain the hash of the container image as the permanent identifier for it, rather than assuming the (mutable) tag won't change. This seems like a fundamental mismatch between how Docker treats tags and how Kubernetes treats them, but it seems resolvable, to me. Every one of the problems you list below could be resolved by storing and explicitly displaying the hash of the currently running container. Tagging images by their git hashes doesn't really express what I mean when I create a deployment, and introduces awkward dependencies requiring me to propagate those tags through the system. |
@brendandburns Right, liveness checks seem to be another easy way. That serves my needs; I could have thought of that. Consider my argument for this taken back :)
@brendandburns and @yujuhong: I could see this being useful in a number of use cases, where "latest" is used in prod.
Depends on how "latest" gets used. I have worked with a number of environments where there is a single image registry that supports prod/testing/etc. (which makes sense). However, the given repos can be populated only by CI. Builds off of any branch get tagged correctly with versions, but builds off HEAD from master (which pass all tests, of course) also get tagged "latest". Prod environments, in turn, point at "latest". That way I don't need to update anything about versions for prod; I just need to say, "go rolling update" (either automatically or when a human approves, which hopefully will be removed from the process very soon). To answer the "danger" question:
So:
I guess I could whip up a script or Web app that lists all available tags that come from "master" and makes them pick one, and when we go full automated, have the CI also pull the deployment, update the image, and redeploy?
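As a sketch of what that automated step could look like (the deployment/container names and the tag variable are placeholders), the CI job can simply point the Deployment at the chosen tag and wait for the rollout:

```sh
# Update the container image to the tag selected by CI, then block until the
# rolling update completes.
kubectl set image deployment/my-app my-app="registry.example.com/my-app:${CHOSEN_TAG}"
kubectl rollout status deployment/my-app --timeout=5m
```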
We ended up building a simple python script that builds our yaml files. Here is an example of that.
Guys, Kubernetes 1.15 will ship with a `kubectl rollout restart` command. See #13488.
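For reference, a minimal usage sketch (the deployment name is a placeholder):

```sh
# Triggers a rolling restart of the pods without changing the image tag.
kubectl rollout restart deployment/my-app
```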
Wow. A 4 year dust-up.
I like that this can also fix unbalanced clusters (assuming it reschedules), and pick up edited config maps (until there is support for versioning those).
But I still want it for StatefulSets. Maybe Deployments need the option of controlling a StatefulSet, not just a ReplicaSet.
I just ended up on this issue from following some blog posts, etc. One solution, as mentioned above, was to...
For folks who want to do it in an automated way, we wrote a little tool some time ago, kbld (https://get-kbld.io), that transforms image references into their digest equivalents. Even though we did this to lock down which image is being used, it also solves this problem in a more automated manner.
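A hedged example of the kind of pipeline this enables, assuming kbld's documented `-f` usage (the manifest filename is a placeholder):

```sh
# kbld rewrites image references in the manifests to immutable digests,
# so applying the output always pins exactly the image that was resolved.
kbld -f deployment.yml | kubectl apply -f -
```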
When HPA is enabled, `kubectl rollout restart` creates the maximum number of pods.
A workaround for this is to pull by SHA digest, which is really working for me.
Implement SHA digest on what?
@VanitySoft, @dapseen is saying to pull by Docker image SHA digest instead of by tag names. This would be a change in your CI/CD workflow. You'd have to add something like this (assuming you're using Docker Hub):

```sh
docker_token=$(curl -s -u "${DOCKER_USERNAME}:${DOCKER_PASSWORD}" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${DOCKER_REPO}:pull&account=${DOCKER_USERNAME}" | jq -r '.token')
docker_digest=$(curl -s -I -H "Authorization: Bearer ${docker_token}" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "https://index.docker.io/v2/${DOCKER_REPO}/manifests/${DOCKER_TAG}" | grep 'Docker-Content-Digest' | cut -d' ' -f2 | tr -d '\r')
unset docker_token
```

The image is then referenced by its digest (`${DOCKER_REPO}@${docker_digest}`) rather than by tag. This is the only way to achieve truly idempotent deployments, since digests don't change.
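Continuing that snippet, a sketch of pointing the workload at the resolved digest (the deployment and container names are placeholders):

```sh
# The digest form (repo@sha256:...) is immutable, so the Deployment records
# exactly which image it is running.
kubectl set image deployment/my-app my-app="${DOCKER_REPO}@${docker_digest}"
```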
Hi, any help would be really appreciated. Thanks.
The `kubectl rollout restart` works by setting an annotation on the pod template:

```yaml
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2020-05-25T19:13:21+02:00"
```

In order to replicate that in Helm you can use the same pattern. We use a value flag to activate a force restart when needed by adding the following line to the deployment:

```yaml
spec:
  template:
    metadata:
      annotations:
        {{if .Values.forceRestart }}helm.sh/deploy-date: "{{ .Release.Time.Seconds }}"{{end}}
```
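For context, a sketch of how that flag might be toggled at deploy time (the release and chart names are placeholders, not from this thread):

```sh
# Setting forceRestart renders the helm.sh/deploy-date annotation, which changes
# the pod template and therefore triggers a rolling update of the Deployment.
helm upgrade my-release ./my-chart --set forceRestart=true
```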
Hey dkapanidis, |
We use this on CI/CD, and it depends on the branch the CI/CD builds: if it is the "develop" branch (or "master", depending on your git flow), which translates to "latest" tags in the Docker registry, then the CI/CD activates the flag (so that it is only used when the image is overwritten, not during tag releases). I'm assuming here that the CI and CD are triggered together and every time an image is built the deployment is also done, which means you always need to redeploy in those cases. As for downtime, there should be none, as the rollout of the deployment takes care of that.
Hi @Arjunkrisha, I built a tool for this: https://github.com/philpep/imago, which you can just invoke with
Hi philpep,
Thanks!
I like this idea as well. Will check out and let you know which works best.
Hi. I know this case is closed, but here is my solution and why it works. From the Kubernetes documentation: "This means that the new revision is created if and only if the Deployment's Pod template (.spec.template) is changed, for example if you update the labels or container images of the template." ([Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment)) I use GitLab to deploy to Kubernetes, and I have access to the CI_COMMIT_SHA variable to differentiate between commits; I put it in an annotation in deploy.yaml.
Using the "apply" command it will perform the update because the annotation has changed. kubectl apply -f deploy.yaml |
This is my favourite GitHub issue. |
It's been ages since I upvoted this, but I think what I ended up doing is forcing a rollout.
Maybe using a Kyverno policy to achieve it.
Just realized that this issue was created by @yissachar in Sept 2016 😱 URunner source and docs --> https://github.com/texano00/urunner
Problem
A frequent question that comes up on Slack and Stack Overflow is how to trigger an update to a Deployment/RS/RC when the image tag hasn't changed but the underlying image has.
Consider:

1. A user creates a Deployment (or RS/RC) running the image `foo:latest`.
2. The user builds a fix and pushes a new `foo:latest` to their registry.
3. The user wants their pods to re-pull and run the updated `foo:latest`.

The problem is that there is no existing Kubernetes mechanism which properly covers this.
Current Workarounds
- Pull the image by digest instead of by tag, e.g. `localhost:5000/andy/busybox@sha256:2aac5e7514fbc77125bd315abe9e7b0257db05fe498af01a58e239ebaccf82a8`.
- Use the `latest` tag or `imagePullPolicy: Always` and delete the pods (see the sketch below). New pods will pull the new image. This approach doesn't do a rolling update and will result in downtime.
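A sketch of that second workaround (the label selector is a placeholder): with `imagePullPolicy: Always`, deleting the pods makes their replacements pull the tag again.

```sh
# Note: this is not a rolling update; expect downtime while the pods restart.
kubectl delete pods -l app=foo
```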
Possible Solutions

cc @justinsb