Feature: Ready and Liveness endpoints #1304
Comments
I'm not familiar with kubernetes but according to the docs you could use http endpoints. Couldn't you simply use the query endpoint for it? 🤔
I'm not necessarily against adding the set of k8s HTTP endpoints, but you can use a command to check this already.
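For illustration, the existing command-based check could be wired up like this in a Docker Compose file (a sketch only; the image tag, binary path, and timing values are assumptions — `blocky healthcheck` is the command the shipped image uses per later comments in this thread):

```yaml
services:
  blocky:
    image: ghcr.io/0xerr0r/blocky
    healthcheck:
      # Runs blocky's built-in healthcheck command, which issues
      # a DNS query against the local server.
      test: ["CMD", "/app/blocky", "healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3
```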
@ThinkChaos Commands aren't great for this purpose. I understand they are supported, but endpoints are much better.
@kwitsch Send a GET request to …
Why are endpoints better? The command uses blocky which is already available in the container.
Ok, since the body is parsed and I'm currently not sure how to set it, this may not be the preferred method. I would prefer the method @ThinkChaos suggested.
I went and added this, but an HTTP endpoint is still preferred. I can work on this feature if y'all support it?
Can you please explain why? As for whether we want it, I'll defer to @0xERR0R.
I would rather have kubelet talk to its API endpoint than execute some arbitrary command inside a container.
An API endpoint for external monitoring without actual binary-execution permissions could actually be useful. I'm in favor of adding a …
We currently provide a Docker image with a predefined healthcheck command. This command performs a plain DNS query (port 53) with a special query. In the default configuration, all HTTP endpoints are disabled.

We can provide a new REST endpoint for the healthcheck. This endpoint could also return some detailed information (for example the connection status to Redis or Postgres); maybe this project could be interesting. But it will only work if an HTTP port is defined in blocky's configuration. This endpoint could be used for the liveness check.

The readiness check is used in k8s to decide whether HTTP traffic should be routed to the pod. For blocky, this covers only DoH. If someone uses plain DNS or DoT, imho it does not make sense to define a readiness probe.

Btw, did you try to use a TCP liveness probe with port 53?
Just stumbled on this issue and coincidentally just created a deployment yesterday that is making use of port 53 for the readiness probe:
Works fine so far, but I'm also supporting the idea of a proper healthcheck implementation (also regarding the Redis status).
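The probe described above presumably looks something like this (a minimal sketch reconstructed from the description, not the exact config from that deployment; the timing values are assumptions):

```yaml
readinessProbe:
  tcpSocket:
    port: 53   # plain-DNS TCP port
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```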
@kamelohr thanks for posting an alternative working config.
FWIW my question was to know if there's a technical reason why HTTP endpoints are better or if it's just about ease of setup and following k8s conventions. I agree that having the standard k8s endpoints seems worth it.
To make this clear: Kubernetes has no preference for any kind of probe type. Commands are fine, as are HTTP endpoints. Port probes are a bit disfavoured because the port might be open while the app is broken. But at the end of the day, it's up to the application and use case to decide what kinds of probes make sense.

It's far more important to get readiness vs. liveness clear than the medium used to achieve it. Readiness describes the ability of the instance to handle traffic, whereas liveness is supposed to prevent a deadlock situation. (Some background information for those interested: https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes)

And here comes some personal opinion: having a binary run a DNS query against the DNS server is a much better way to check its health than running an HTTP query against an HTTP endpoint. (Of course you can make the HTTP endpoint trigger the DNS client, but does introducing more moving parts really make things easier?) But if one really wants to do healthchecks using an HTTP endpoint, maybe using the DoH API would be an idea ^^
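If one went the DoH route, the probe might be sketched like this (an assumption, not a tested setup: per RFC 8484 a DoH GET carries a base64url-encoded DNS message in the `dns` query parameter, so `<ENCODED_QUERY>` is a placeholder you would precompute, and the port depends on blocky's HTTP configuration):

```yaml
livenessProbe:
  httpGet:
    # <ENCODED_QUERY> = base64url(raw DNS query message), precomputed
    path: /dns-query?dns=<ENCODED_QUERY>
    port: 4000
  periodSeconds: 10
  failureThreshold: 3
```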
Thanks all of you, I developed the config below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blocky
spec:
  template:
    spec:
      containers:
        - name: blocky
          image: ghcr.io/0xerr0r/blocky:v0.23
          ports:
            - name: dns-tcp
              containerPort: 1053
            - name: dns-udp
              containerPort: 1053
              protocol: UDP
            - name: http
              containerPort: 4000
          startupProbe:
            exec:
              command:
                - /app/blocky
                - healthcheck
                - --port
                - "1053"
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 30
          livenessProbe:
            exec:
              command:
                - /app/blocky
                - healthcheck
                - --port
                - "1053"
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 6
          readinessProbe:
            tcpSocket:
              port: dns-tcp
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            failureThreshold: 6
```
Hey there @0xERR0R,
Would be really nice if we had liveness and readiness endpoints for use in K8s and other environments. See here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

This way, if blocky is configured as our network's DNS, it doesn't start accepting traffic until those endpoints are OK.