Harbor core and jobservice health checks timeouts #986
Hi, in most environments the local network timeout will not exceed 1 second, so I don't think we should edit this config item.
Most requests in our environment also do not exceed 1 second, but from time to time they do. And when that happens twice in a row, our service is restarted, and this can occur multiple times per day. We don't need to change the default timeout; we could make it configurable instead, as in the example above.
This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.
Issue still exists
We are using Harbor Helm Chart 1.6.2 (which contains Harbor v2.2.2) on Kubernetes 1.19.
Core and jobservice pods are restarting from time to time because of the timeouts on readiness and liveness probes:
By default, the timeout for the health check is 1 second. When I run the command below inside the containers, it sometimes takes a few seconds to respond (in most cases it responds quickly):
```shell
curl localhost:8080/api/v2.0/ping
```
I don't know what causes the occasional long response time for health checks. But because of the Harbor probe settings (failureThreshold of 2 for core and the default 3 for jobservice), these pods are restarted frequently.
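For reference, the probe configuration on the core pod looks roughly like this. This is an illustrative sketch built from the values mentioned above (the `/api/v2.0/ping` endpoint, the 1-second timeout, and failureThreshold of 2), not a verbatim copy of the chart templates:

```yaml
# Illustrative sketch, not the actual Harbor chart template.
livenessProbe:
  httpGet:
    path: /api/v2.0/ping
    port: 8080
  timeoutSeconds: 1      # probe fails if ping takes longer than 1s
  failureThreshold: 2    # two consecutive failures restart the pod
readinessProbe:
  httpGet:
    path: /api/v2.0/ping
    port: 8080
  timeoutSeconds: 1
```

With these settings, two slow responses in a row are enough to trigger a restart, which matches the behavior we observe.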
In my opinion, Harbor should either configure a bigger timeout for these services or expose configuration for these probes via values.yaml.
In the Helm chart this can be done easily by placing a section like this in values.yaml:
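For example, something along these lines. The key names here are my own suggestion for new chart options, not options the chart currently exposes:

```yaml
# Hypothetical values.yaml section exposing probe settings per component.
core:
  livenessProbe:
    timeoutSeconds: 5
    failureThreshold: 3
  readinessProbe:
    timeoutSeconds: 5
jobservice:
  livenessProbe:
    timeoutSeconds: 5
```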
And then in our deployment definition we can put the following (example copied from my Helm chart; I haven't tested it for YAML validity):
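A sketch of what that deployment template could look like, assuming the hypothetical values.yaml keys above; the indentation passed to `nindent` depends on where the probe sits in the actual template:

```yaml
# Sketch of the core deployment template, untested for template validity.
livenessProbe:
  httpGet:
    path: /api/v2.0/ping
    port: 8080
  {{- with .Values.core.livenessProbe }}
  {{- toYaml . | nindent 2 }}
  {{- end }}
```

The `with`/`toYaml` pattern merges whatever probe fields the user sets in values.yaml into the probe spec, falling back to the chart's defaults when the section is empty.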
Then any chart user can specify whatever options they want.