Database connection limits reached when GC is run #1708
Why do you run 80 core pods? Are you piping the S3 traffic through core (i.e., is redirect disabled in docker distribution)? One can do quite a bit of optimization with indexes and caches, but that won't solve the GC issue. It's a fundamental problem Harbor inherited from docker distribution.
Maybe you need to raise the DB connection limit to a larger number. For reference:
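For context, the connection limits referenced here live in the chart's `database` values: `database.maxOpenConns` and `database.maxIdleConns` control how many Postgres connections each core/exporter pod may open. A minimal sketch, assuming an external database; the host and the numbers are illustrative, not recommendations:

```yaml
# values.yaml (harbor-helm) -- illustrative values, tune for your RDS size
database:
  type: external
  external:
    host: my-rds-host.example.com   # hypothetical endpoint
    port: "5432"
  # Per-pod connection pool limits; total DB connections scale with
  # replica count, so large core deployments should keep these small.
  maxIdleConns: 50
  maxOpenConns: 100
```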
@Vad1mo

```yaml
{{- if .Values.s3 }}
s3:
{{- if and (ne .Values.environment "internal") (ne .Values.environment "internal-test") }}
  existingSecret: {{ .Values.targetNamespace }}-secret
{{- end }}
  region: {{ .Values.s3.region }}
  bucket: {{ .Values.s3.bucket }}
  accesskey: managed-by-sealed-secret
  secretkey: managed-by-sealed-secret
  regionendpoint: {{ .Values.s3.regionendpoint }}
  encrypt: ""
  keyid: ""
  secure: ""
  skipverify: true
  v4auth: ""
  chunksize: "5242880"
  rootdirectory: ""
  storageclass: STANDARD
  multipartcopychunksize: "33554432"
  multipartcopymaxconcurrency: 100
  multipartcopythresholdsize: "33554432"
{{- end }}
```
Hi,
We have harbor-helm deployed on a K8s cluster with RDS and S3 as the data backend. We have begun seeing an issue where, when GC runs, it takes up all of the available connections on the RDS cluster. This makes Harbor unreachable via the UI, the API, and OCI clients. The connections are eventually freed after ~5 hours, but during that time Harbor is inoperable.
Please let me know if this is a better issue for the main harbor repo.
We have paused the GC schedule for the time being.
Harbor helm chart version: 1.11.1
Harbor version: v2.7.1-6015b3ef
DB Connection Values:
At the time the connections were overwhelmed, we had ~80 core + exporter pods running. By my calculations, that only equates to ~1100 connections, which is nowhere near the 5k that we saw at the time. Any thoughts on this?
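The ceiling the reporter is computing follows the formula from the Harbor docs: max connections ≈ maxOpenConns × (core pods + exporter pods). A minimal sketch of that arithmetic; the pod split and the per-pod `maxOpenConns` value below are illustrative assumptions, not the reporter's actual configuration:

```python
def expected_max_connections(core_pods: int, exporter_pods: int,
                             max_open_conns: int) -> int:
    """Upper bound on Postgres connections opened by Harbor core + exporter,
    per the formula in the Harbor docs."""
    return (core_pods + exporter_pods) * max_open_conns

# Hypothetical: 78 core + 2 exporter pods, each allowed 14 open connections.
print(expected_max_connections(78, 2, 14))  # 1120
```

If observed connections far exceed this ceiling, the extra connections are presumably coming from components outside the formula (e.g., jobservice during GC), which would be worth confirming via `pg_stat_activity`.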
Pics
In the pictures below, you can see that the connections to the DB exceed what they should be per the Harbor docs, where max connections = [maxOpenConns] × (core + exporter). The spike in pod count around 21:30 is the result of my interventions and is well after we hit max connections on the DB.