Initial check for container being ready in a pod #82482
Comments
Actually I don't know if it's really related to these SIGs, but @kubernetes/sig-scheduling-feature-requests @kubernetes/sig-autoscaling-feature-requests
@alitoufighi: Reiterating the mentions to trigger a notification: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is not really related to sig scheduling. /sig node
/sig scheduling remove
@k82cn: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-sig scheduling
I am also looking for something like this. Any solution yet?
@ashish-oyo As it seems to be implemented, I'm closing this issue.
I'm reopening again as it seems that
@alinbalutoiu actually both
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What would you like to be added:
I had the issue mentioned here: #37450
My applications take some time after creation before they're ready to serve traffic, and I thought `readinessProbe` must be the solution to keep them `NotReady` until they actually are. But I only need this at the beginning of the container's lifecycle, and running a `readinessProbe` for its whole lifetime caused heavy CPU load, which is not acceptable at all.

If there were something like `initialReadinessDelay` that let me say "wait for 2 minutes before bringing down old pods and replacing them with new ones", or an `initialReadinessProbe` to check the availability of my containers, or an option on the existing `readinessProbe` for this purpose, I would be happy to get a truly zero-downtime update.
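Neither field exists in the Kubernetes Pod API; purely as an illustration of the request, a sketch of how such a knob might read in a Pod spec (the field name `initialReadinessProbe`, the image, and the endpoint are all hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-starting-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0    # hypothetical image
    ports:
    - containerPort: 8080
    # Hypothetical field from this request: probe readiness only during
    # startup, and stop probing once the container first reports Ready.
    initialReadinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 120             # the "wait for 2 minutes" above
```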
Why is this needed:
Aside from `readinessProbe` and `livenessProbe`, which monitor the health of Pods, sometimes containers are not that complicated: they only need some time at startup before they're really up and ready, and afterwards they just do their job. If something goes wrong, they simply crash and get restarted by their Deployment. Limiting a `readinessProbe` to run only once would prevent this excess resource usage.
Note: the `readinessProbe` I was using was this simple one, because I only needed 60 seconds before the kubelet marks my container as `Ready`.
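The snippet itself wasn't captured here; a minimal sketch of what such a probe might look like, assuming an exec-based file check (the command and marker file are hypothetical; the 60-second `initialDelaySeconds` comes from the description above):

```yaml
readinessProbe:
  exec:
    command:
    - cat
    - /tmp/ready             # hypothetical readiness marker file
  initialDelaySeconds: 60    # skip probing for the first 60 seconds
  periodSeconds: 10
```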