kong/charts not running #983

Open

NICK-DUAN opened this issue Jan 12, 2024 · 16 comments

@NICK-DUAN

When I upgrade kong/charts (from 1.7.0 to 2.30.0), Kong and the ingress controller are not running.

My values.yaml:

env:
  client_body_buffer_size: 10m
  client_header_buffer_size: 128k
  nginx_http_client_body_buffer_size: 10m
  nginx_http_client_header_buffer_size: 128k
  nginx_http_large_client_header_buffers: 4 128k
admin:
  enabled: true
  type: ClusterIP
  http:
    enabled: true
  tls:
    enabled: false
proxy:
  type: ClusterIP
migrations:
  preUpgrade: false
  postUpgrade: false
ingressController:
  admissionWebhook:
    ebabled: false
  resources:
    limits:
      cpu: 100m
      memory: 600Mi
    requests:
      cpu: 100m
      memory: 300Mi
resources:
  limits:
    cpu: 500m
    memory: 1024Mi
  requests:
    cpu: 200m
    memory: 512Mi
replicaCount: 2
autoscaling:
  enabled: true
  maxReplicas: 5
  metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 80
          type: Utilization
      type: Resource
    - resource:
        name: memory
        target:
          averageUtilization: 80
          type: Utilization
      type: Resource
  minReplicas: 2
serviceMonitor:
  enabled: true
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: appshipyard
          topologyKey: kubernetes.io/hostname
        weight: 100
manager:
  enabled: true
  type: ClusterIP
portal:
  enabled: true
  type: ClusterIP
portalapi:
  enabled: true
  type: ClusterIP

Ingress controller log:

time="2024-01-12T10:51:12Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=55/60
time="2024-01-12T10:51:13Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=56/60
time="2024-01-12T10:51:14Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=57/60
time="2024-01-12T10:51:15Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=58/60
time="2024-01-12T10:51:16Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=59/60
Error: could not retrieve Kong admin root(s): making HTTP request: Get "http://localhost:8001/": dial tcp [::1]:8001: connect: connection refused
time="2024-01-12T10:51:41Z" level=info msg="diagnostics server disabled"
time="2024-01-12T10:51:41Z" level=info msg="starting controller manager" commit=c29db3e1acf74059d8efba0dd8b7c7913a5518ba logger=setup release=2.12.2 repo="https://github.com/Kong/kubernetes-ingress-controller.git"
time="2024-01-12T10:51:41Z" level=info msg="the ingress class name has been set" logger=setup value=appshipyard
time="2024-01-12T10:51:41Z" level=info msg="getting enabled options and features" logger=setup
time="2024-01-12T10:51:41Z" level=info msg="getting the kubernetes client configuration" logger=setup
W0112 10:51:41.325593       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2024-01-12T10:51:41Z" level=info msg="starting standalone health check server" logger=setup
time="2024-01-12T10:51:41Z" level=info msg="getting the kong admin api client configuration" logger=setup
time="2024-01-12T10:51:41Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=0/60
time="2024-01-12T10:51:42Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=1/60

Proxy log:

Error: 

  Run with --v (verbose) or --vv (debug) for more details

My Kong was upgraded from a very old version, and I don't know what changed in the meantime. Can anybody help?

@rainest
Contributor

rainest commented Jan 15, 2024

You can add the suggested argument to the proxy container. We don't expose args in values.yaml (almost everything is controlled through environment variables instead, but the verbose flags aren't), so you'd need to run kubectl edit deploy DEPLOYMENT_NAME, find the proxy container (search for containers:), and add

args:
- --vv
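
For illustration, a minimal sketch of where those args end up after kubectl edit deploy. The container name "proxy" and the full kong docker-start --vv command match what the reporter applied later in this thread, so treat the exact values as an example rather than a prescription:

spec:
  template:
    spec:
      containers:
      # ...ingress-controller container unchanged...
      - name: proxy
        image: kong:3.5
        args:          # added: the default command plus the verbosity flag
        - kong
        - docker-start
        - --vv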

Given such a large version jump, you may want to consider simply starting from a fresh install and adding the settings you need on top of that. If your routing configuration is all handled through Ingresses/Services/etc., you don't strictly need to upgrade the actual Deployment, since the controller will restore it.

There are changes to Kubernetes and chart configuration that you'll need to perform regardless, however.

https://github.com/Kong/charts/blob/main/charts/kong/UPGRADE.md covers chart values.yaml changes.

https://docs.konghq.com/kubernetes-ingress-controller/2.11.x/guides/upgrade/ covers steps to upgrade from KIC 1.x to 2.x. https://docs.konghq.com/kubernetes-ingress-controller/3.0.x/guides/migrate/kongingress/ and the other guides in that section cover going from KIC 2.x to 3.x.

https://docs.konghq.com/kubernetes-ingress-controller/2.11.x/guides/upgrade-kong-3x/ covers going from Kong 2.x to 3.x.

@NICK-DUAN
Author


After adding the args and running helm -n kong install kong kong/kong -f ./charts/config.yaml --debug, I got this in the proxy container (args=["kong", "docker-start", "--vv"]):

Error: 
/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: 
stack traceback:
	[C]: in function 'assert'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: in function 'load_conf'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:85: in function </usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:84>
	[C]: in function 'xpcall'
	/usr/local/bin/kong:99: in function 'file_gen'
	init_worker_by_lua(nginx.conf:189):51: in function <init_worker_by_lua(nginx.conf:189):49>
	[C]: in function 'xpcall'
	init_worker_by_lua(nginx.conf:189):58: in function <init_worker_by_lua(nginx.conf:189):56>

And this time I used these custom values:

env:
  admin_listen: "127.0.0.1:8001"
  log_level: debug
admin:
  enabled: true
  http:
    enabled: true
  tls:
    enabled: false

And I created a new namespace to deploy Kong; I think there are no distractions this time.

@NICK-DUAN
Author

NICK-DUAN commented Jan 19, 2024

This is my debug information:
Release "kong" has been upgraded. Happy Helming!
NAME: kong
LAST DEPLOYED: Fri Jan 19 15:04:03 2024
NAMESPACE: kong
STATUS: deployed
REVISION: 7
TEST SUITE: None
USER-SUPPLIED VALUES:
admin:
  enabled: true
  http:
    enabled: true
  tls:
    enabled: false
env:
  admin_listen: 127.0.0.1:8001
  log_level: debug

COMPUTED VALUES:
admin:
  annotations: {}
  enabled: true
  http:
    containerPort: 8001
    enabled: true
    parameters: []
    servicePort: 8001
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    ingressClassName: null
    path: /
    pathType: ImplementationSpecific
  labels: {}
  loadBalancerClass: null
  tls:
    client:
      caBundle: ""
      secretName: ""
    containerPort: 8444
    enabled: false
    parameters:
    - http2
    servicePort: 8444
  type: NodePort
autoscaling:
  behavior: {}
  enabled: false
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 2
  targetCPUUtilizationPercentage: null
certificates:
  admin:
    clusterIssuer: ""
    commonName: kong.example
    dnsNames: []
    enabled: true
    issuer: ""
  cluster:
    clusterIssuer: ""
    commonName: kong_clustering
    dnsNames: []
    enabled: true
    issuer: ""
  clusterIssuer: ""
  enabled: false
  issuer: ""
  portal:
    clusterIssuer: ""
    commonName: developer.example
    dnsNames: []
    enabled: true
    issuer: ""
  proxy:
    clusterIssuer: ""
    commonName: app.example
    dnsNames: []
    enabled: true
    issuer: ""
cluster:
  annotations: {}
  enabled: false
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    ingressClassName: null
    path: /
    pathType: ImplementationSpecific
  labels: {}
  loadBalancerClass: null
  tls:
    containerPort: 8005
    enabled: false
    parameters: []
    servicePort: 8005
  type: ClusterIP
clusterCaSecretName: ""
clustertelemetry:
  annotations: {}
  enabled: false
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    ingressClassName: null
    path: /
    pathType: ImplementationSpecific
  labels: {}
  loadBalancerClass: null
  tls:
    containerPort: 8006
    enabled: false
    parameters: []
    servicePort: 8006
  type: ClusterIP
containerSecurityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
  seccompProfile:
    type: RuntimeDefault
dblessConfig:
  config: ""
  configMap: ""
  secret: ""
deployment:
  daemonset: false
  hostNetwork: false
  hostname: ""
  kong:
    enabled: true
  prefixDir:
    sizeLimit: 256Mi
  serviceAccount:
    automountServiceAccountToken: false
    create: true
  test:
    enabled: false
  tmpDir:
    sizeLimit: 1Gi
deploymentAnnotations: {}
enterprise:
  enabled: false
  portal:
    enabled: false
  rbac:
    admin_gui_auth: basic-auth
    admin_gui_auth_conf_secret: CHANGEME-admin-gui-auth-conf-secret
    enabled: false
    session_conf_secret: kong-session-config
  smtp:
    admin_emails_from: none@example.com
    admin_emails_reply_to: none@example.com
    auth:
      smtp_password_secret: CHANGEME-smtp-password
      smtp_username: ""
    enabled: false
    portal_emails_from: none@example.com
    portal_emails_reply_to: none@example.com
    smtp_admin_emails: none@example.com
    smtp_auth_type: ""
    smtp_host: smtp.example.com
    smtp_port: 587
    smtp_ssl: nil
    smtp_starttls: true
  vitals:
    enabled: true
env:
  admin_access_log: /dev/stdout
  admin_error_log: /dev/stderr
  admin_gui_access_log: /dev/stdout
  admin_gui_error_log: /dev/stderr
  admin_listen: 127.0.0.1:8001
  database: "off"
  log_level: debug
  nginx_worker_processes: "2"
  portal_api_access_log: /dev/stdout
  portal_api_error_log: /dev/stderr
  prefix: /kong_prefix/
  proxy_access_log: /dev/stdout
  proxy_error_log: /dev/stderr
  router_flavor: traditional
extraConfigMaps: []
extraLabels: {}
extraObjects: []
extraSecrets: []
image:
  effectiveSemver: null
  pullPolicy: IfNotPresent
  repository: kong
  tag: "3.5"
ingressController:
  adminApi:
    tls:
      client:
        caSecretName: ""
        certProvided: false
        enabled: false
        secretName: ""
  admissionWebhook:
    certificate:
      provided: false
    enabled: true
    failurePolicy: Ignore
    namespaceSelector: {}
    port: 8080
    service:
      labels: {}
  args: []
  enabled: true
  env:
    kong_admin_tls_skip_verify: true
  gatewayDiscovery:
    adminApiService:
      name: ""
      namespace: ""
    enabled: false
    generateAdminApiService: false
  image:
    effectiveSemver: null
    repository: kong/kubernetes-ingress-controller
    tag: "3.0"
  ingressClass: kong
  ingressClassAnnotations: {}
  konnect:
    apiHostname: us.kic.api.konghq.com
    enabled: false
    license:
      enabled: false
    runtimeGroupID: ""
    tlsClientCertSecretName: konnect-client-tls
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  rbac:
    create: true
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /readyz
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  resources: {}
  watchNamespaces: []
lifecycle:
  preStop:
    exec:
      command:
      - kong
      - quit
      - --wait=15
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /status
    port: status
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
manager:
  annotations: {}
  enabled: true
  http:
    containerPort: 8002
    enabled: true
    parameters: []
    servicePort: 8002
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    ingressClassName: null
    path: /
    pathType: ImplementationSpecific
  labels: {}
  loadBalancerClass: null
  tls:
    containerPort: 8445
    enabled: true
    parameters:
    - http2
    servicePort: 8445
  type: NodePort
migrations:
  annotations:
    sidecar.istio.io/inject: false
  backoffLimit: null
  jobAnnotations: {}
  postUpgrade: true
  preUpgrade: true
  resources: {}
nodeSelector: {}
plugins: {}
podAnnotations:
  kuma.io/gateway: enabled
  traffic.sidecar.istio.io/includeInboundPorts: ""
podDisruptionBudget:
  enabled: false
podLabels: {}
podSecurityPolicy:
  annotations: {}
  enabled: false
  labels: {}
  spec:
    allowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    hostIPC: false
    hostNetwork: false
    hostPID: false
    privileged: false
    readOnlyRootFilesystem: true
    runAsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - configMap
    - secret
    - emptyDir
    - projected
portal:
  annotations: {}
  enabled: true
  http:
    containerPort: 8003
    enabled: true
    parameters: []
    servicePort: 8003
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    ingressClassName: null
    path: /
    pathType: ImplementationSpecific
  labels: {}
  loadBalancerClass: null
  tls:
    containerPort: 8446
    enabled: true
    parameters:
    - http2
    servicePort: 8446
  type: NodePort
portalapi:
  annotations: {}
  enabled: true
  http:
    containerPort: 8004
    enabled: true
    parameters: []
    servicePort: 8004
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    ingressClassName: null
    path: /
    pathType: ImplementationSpecific
  labels: {}
  loadBalancerClass: null
  tls:
    containerPort: 8447
    enabled: true
    parameters:
    - http2
    servicePort: 8447
  type: NodePort
postgresql:
  architecture: standalone
  audit:
    clientMinMessages: error
    logConnections: false
    logDisconnections: false
    logHostname: false
    logLinePrefix: ""
    logTimezone: ""
    pgAuditLog: ""
    pgAuditLogCatalog: "off"
  auth:
    database: kong
    enablePostgresUser: true
    existingSecret: ""
    password: ""
    postgresPassword: ""
    replicationPassword: ""
    replicationUsername: repl_user
    secretKeys:
      adminPasswordKey: postgres-password
      replicationPasswordKey: replication-password
      userPasswordKey: password
    usePasswordFiles: false
    username: kong
  clusterDomain: cluster.local
  common:
    exampleValue: common-chart
    global:
      imagePullSecrets: []
      imageRegistry: ""
      postgresql:
        auth:
          database: ""
          existingSecret: ""
          password: ""
          postgresPassword: ""
          secretKeys:
            adminPasswordKey: ""
            replicationPasswordKey: ""
            userPasswordKey: ""
          username: ""
        service:
          ports:
            postgresql: ""
      storageClass: ""
  commonAnnotations: {}
  commonLabels: {}
  containerPorts:
    postgresql: 5432
  diagnosticMode:
    args:
    - infinity
    command:
    - sleep
    enabled: false
  enabled: false
  extraDeploy: []
  fullnameOverride: ""
  global:
    imagePullSecrets: []
    imageRegistry: ""
    postgresql:
      auth:
        database: ""
        existingSecret: ""
        password: ""
        postgresPassword: ""
        secretKeys:
          adminPasswordKey: ""
          replicationPasswordKey: ""
          userPasswordKey: ""
        username: ""
      service:
        ports:
          postgresql: ""
    storageClass: ""
  image:
    debug: false
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/postgresql
    tag: 13.11.0-debian-11-r20
  kubeVersion: ""
  ldap:
    basedn: ""
    binddn: ""
    bindpw: ""
    enabled: false
    port: ""
    prefix: ""
    scheme: ""
    searchAttribute: ""
    searchFilter: ""
    server: ""
    suffix: ""
    tls:
      enabled: false
    uri: ""
  metrics:
    containerPorts:
      metrics: 9187
    containerSecurityContext:
      enabled: true
      runAsNonRoot: true
      runAsUser: 1001
    customLivenessProbe: {}
    customMetrics: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    enabled: false
    extraEnvVars: []
    image:
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/postgres-exporter
      tag: 0.11.1-debian-11-r22
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    prometheusRule:
      enabled: false
      labels: {}
      namespace: ""
      rules: []
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits: {}
      requests: {}
    service:
      annotations:
        prometheus.io/port: '{{ .Values.metrics.service.ports.metrics }}'
        prometheus.io/scrape: "true"
      clusterIP: ""
      ports:
        metrics: 9187
      sessionAffinity: None
    serviceMonitor:
      enabled: false
      honorLabels: false
      interval: ""
      jobLabel: ""
      labels: {}
      metricRelabelings: []
      namespace: ""
      relabelings: []
      scrapeTimeout: ""
      selector: {}
    startupProbe:
      enabled: false
      failureThreshold: 15
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
  nameOverride: ""
  networkPolicy:
    egressRules:
      customRules: {}
      denyConnectionsToExternal: false
    enabled: false
    ingressRules:
      primaryAccessOnlyFrom:
        customRules: {}
        enabled: false
        namespaceSelector: {}
        podSelector: {}
      readReplicasAccessOnlyFrom:
        customRules: {}
        enabled: false
        namespaceSelector: {}
        podSelector: {}
    metrics:
      enabled: false
      namespaceSelector: {}
      podSelector: {}
  postgresqlDataDir: /bitnami/postgresql/data
  postgresqlSharedPreloadLibraries: pgaudit
  primary:
    affinity: {}
    annotations: {}
    args: []
    command: []
    configuration: ""
    containerSecurityContext:
      enabled: true
      runAsUser: 1001
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    existingConfigmap: ""
    existingExtendedConfigmap: ""
    extendedConfiguration: ""
    extraEnvVars: []
    extraEnvVarsCM: ""
    extraEnvVarsSecret: ""
    extraPodSpec: {}
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    hostIPC: false
    hostNetwork: false
    initContainers: []
    initdb:
      args: ""
      password: ""
      postgresqlWalDir: ""
      scripts: {}
      scriptsConfigMap: ""
      scriptsSecret: ""
      user: ""
    labels: {}
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: primary
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    persistence:
      accessModes:
      - ReadWriteOnce
      annotations: {}
      dataSource: {}
      enabled: true
      existingClaim: ""
      labels: {}
      mountPath: /bitnami/postgresql
      selector: {}
      size: 8Gi
      storageClass: ""
      subPath: ""
    pgHbaConfiguration: ""
    podAffinityPreset: ""
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podSecurityContext:
      enabled: true
      fsGroup: 1001
    priorityClassName: ""
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits: {}
      requests:
        cpu: 250m
        memory: 256Mi
    schedulerName: ""
    service:
      annotations: {}
      clusterIP: ""
      externalTrafficPolicy: Cluster
      extraPorts: []
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      nodePorts:
        postgresql: ""
      ports:
        postgresql: 5432
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    sidecars: []
    standby:
      enabled: false
      primaryHost: ""
      primaryPort: ""
    startupProbe:
      enabled: false
      failureThreshold: 15
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    terminationGracePeriodSeconds: ""
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      rollingUpdate: {}
      type: RollingUpdate
  psp:
    create: false
  rbac:
    create: false
    rules: []
  readReplicas:
    affinity: {}
    annotations: {}
    args: []
    command: []
    containerSecurityContext:
      enabled: true
      runAsUser: 1001
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    extendedConfiguration: ""
    extraEnvVars: []
    extraEnvVarsCM: ""
    extraEnvVarsSecret: ""
    extraPodSpec: {}
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    hostIPC: false
    hostNetwork: false
    initContainers: []
    labels: {}
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: read
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    persistence:
      accessModes:
      - ReadWriteOnce
      annotations: {}
      dataSource: {}
      enabled: true
      existingClaim: ""
      labels: {}
      mountPath: /bitnami/postgresql
      selector: {}
      size: 8Gi
      storageClass: ""
      subPath: ""
    podAffinityPreset: ""
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podSecurityContext:
      enabled: true
      fsGroup: 1001
    priorityClassName: ""
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    replicaCount: 1
    resources:
      limits: {}
      requests:
        cpu: 250m
        memory: 256Mi
    schedulerName: ""
    service:
      annotations: {}
      clusterIP: ""
      externalTrafficPolicy: Cluster
      extraPorts: []
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      nodePorts:
        postgresql: ""
      ports:
        postgresql: 5432
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    sidecars: []
    startupProbe:
      enabled: false
      failureThreshold: 15
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    terminationGracePeriodSeconds: ""
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      rollingUpdate: {}
      type: RollingUpdate
  replication:
    applicationName: my_application
    numSynchronousReplicas: 0
    synchronousCommit: "off"
  service:
    ports:
      postgresql: "5432"
  serviceAccount:
    annotations: {}
    automountServiceAccountToken: true
    create: false
    name: ""
  shmVolume:
    enabled: true
    sizeLimit: ""
  tls:
    autoGenerated: false
    certCAFilename: ""
    certFilename: ""
    certKeyFilename: ""
    certificatesSecret: ""
    crlFilename: ""
    enabled: false
    preferServerCiphers: true
  volumePermissions:
    containerSecurityContext:
      runAsUser: 0
    enabled: false
    image:
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/bitnami-shell
      tag: 11-debian-11-r45
    resources:
      limits: {}
      requests: {}
priorityClassName: ""
proxy:
  annotations: {}
  enabled: true
  http:
    containerPort: 8000
    enabled: true
    parameters: []
    servicePort: 80
  ingress:
    annotations: {}
    enabled: false
    hostname: null
    hosts: []
    ingressClassName: null
    labels: {}
    path: /
    pathType: ImplementationSpecific
  labels:
    enable-metrics: "true"
  loadBalancerClass: null
  nameOverride: ""
  stream: []
  tls:
    containerPort: 8443
    enabled: true
    parameters:
    - http2
    servicePort: 443
  type: LoadBalancer
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /status/ready
    port: status
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
replicaCount: 1
resources: {}
secretVolumes: []
securityContext: {}
serviceMonitor:
  enabled: false
status:
  enabled: true
  http:
    containerPort: 8100
    enabled: true
    parameters: []
  tls:
    containerPort: 8543
    enabled: false
    parameters: []
terminationGracePeriodSeconds: 30
tolerations: []
udpProxy:
  annotations: {}
  enabled: false
  labels: {}
  loadBalancerClass: null
  stream: []
  type: LoadBalancer
updateStrategy: {}
waitImage:
  enabled: true
  pullPolicy: IfNotPresent

HOOKS:
MANIFEST:
---
# Source: kong/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-kong
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
---
# Source: kong/templates/admission-webhook.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kong-kong-validation-webhook-ca-keypair
  namespace:  kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
type: kubernetes.io/tls
data:
    tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lRUVBuV2x5eDcwSVYzMk4xbW9aSm4wREFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFERXhGcmIyNW5MV0ZrYldsemMybHZiaTFqWVRBZUZ3MHlOREF4TVRrd016VXlNalZhRncwegpOREF4TVRZd016VXlNalZhTUJ3eEdqQVlCZ05WQkFNVEVXdHZibWN0WVdSdGFYTnphVzl1TFdOaE1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTF0d3V0RzhUUS9UaHFqRFRtU2VEeUxNbGtFRlkKdEJXMGQ1TzdTNkk4bElWOWR4Z245L1g3eVdCNWQvS04rdUdsMU9xZVM0cURXWWE3RWFQZHIrNy81TDZMZFJlQwowMzRYL090aTBMdkE5cE10Z1FLbFJIaURSUFZUTTBDVjFUZHhpYXhQSE1PMEJsR3B1M0xVU0QzV2ZEcTg0UHhVCkpRL3ZPM3QvSG54WlpNOVpWcHpaWHVKelZ4UllpdUs3ZjFHSGptUmoxQ0VRcGtYSllqOUkwZlFuMzRhSVlIbU0KUWJDS200aE11cFlYNTA0WS8vTEN5ZVRJYjIxdmNXRS9GT2xJeUNDbE9FN21FYm8vMnN0bnF0d21XVjcwc2dIegowdjhWYlBxOGt3MmJCNEE0NzViK0h6alk3UjQxR0xvZXlPZnlQYVNkV2w5WTJUaHpGNVVjcGpOSlhRSURBUUFCCm8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FxUXdIUVlEVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUYKQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMnUvbmo3eVV0eS95UTlwRG9ybE1jVApwNUF0TUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCb2l2MWkweFJmZWh1dCszR3JaYjlkYm1sVngzdk5WK1V1CjlJT0dRLzliRVVMWVpKaENFRGJ3c25sN2lid0ZRTmxaN2tVQUVGOEg1c1I2QmhjRlNaQ3NkdVo1cUtEYnFYVnYKWjRsdWxjalJaV0pvYkZ5elkxMVR3RlJqZ0JvQXNhWjJ6MjVoVXpPTzBMemw3bnhxNVBzb055M2ZWNHBSZGVLQgpLdWNBRGtrdWFWUktyT2tNNGt0YUw2WEhScGk2Z0JxcEJ2a2xJalM2UUlSck04QVdBSkdXbEQ3VWNSRU1PNFZECkRCWG12UlVSUThSallYTFpFZlBoMDEyZmZ0UGNXK3NZTVA5ZitIY3J2RUxNSk4zVGNMNzNQUHhUMWpwYnlhSWUKVVM3MzU0aUc0YUlteHlRcDI3eFVBdGQxdTlzNzk3YUZXZTB2cGEwWUZrTWp3SUJFUWdXRgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMXR3dXRHOFRRL1RocWpEVG1TZUR5TE1sa0VGWXRCVzBkNU83UzZJOGxJVjlkeGduCjkvWDd5V0I1ZC9LTit1R2wxT3FlUzRxRFdZYTdFYVBkcis3LzVMNkxkUmVDMDM0WC9PdGkwTHZBOXBNdGdRS2wKUkhpRFJQVlRNMENWMVRkeGlheFBITU8wQmxHcHUzTFVTRDNXZkRxODRQeFVKUS92TzN0L0hueFpaTTlaVnB6WgpYdUp6VnhSWWl1SzdmMUdIam1SajFDRVFwa1hKWWo5STBmUW4zNGFJWUhtTVFiQ0ttNGhNdXBZWDUwNFkvL0xDCnllVEliMjF2Y1dFL0ZPbEl5Q0NsT0U3bUViby8yc3RucXR3bVdWNzBzZ0h6MHY4VmJQcThrdzJiQjRBNDc1YisKSHpqWTdSNDFHTG9leU9meVBhU2RXbDlZMlRoekY1VWNwak5KWFFJREFRQUJBb0lCQUQwSUhsd3lrUTVrcVJWbQorVFF2L1VjdFhDWTE2YlI5MWQyQm9WcENvMktzNkk3RDFkYWhrUHdLNDNZbStCMmpxeTluMWI4dmdWQVU3VjU5ClphTnNDRlE1cS9OKzBqS3hScThaVGVCczlNc1YwMzhwK1RnUjQzZmJGOThmSVhDSFowRHNLU3pLaW9DaEFjMjEKT0llc3lSaFF0d1pScHJWQWFYeEVBRC93b3BQM3JPakRFWmkxalFzdGF2cDlFMWd5QVBlV2owWHhBSEI1RUZGaApJVjJmRVV4N0pSTFFGd1RwYXNXRElTMHkrUjlRR3B1ZEhxQjFnVkVXYm9wQWpMU2hYNldxRy9rNmNMdHRoT2lYClRENlA2d2dtTDdjVE5OeHZZUVRzZ3VEUEo1eHdkMEIwYzRtcmJWUUhUN21uUVBhTjkzbzV1MXNZS2k5YVZua1EKMHZHaWZpRUNnWUVBNUxrUVRQdmdvMTZNaTBkVnhPeXdoejRsMVlmbXF4T2FaQVhSMjJiNGJpWDJkZko5dVlTeQozRUxTTTNXRWtObS9LalpHOVR3WDVwa0NCZFdJblRrUnlDQ21uc3hoRlpOK0ZmdlNJREdHS1hKZ0U2d2lFUSttCkg2UVZsRmxHZDA2aCtmazk5cmRaeGtNblpCNjIwN0dnL3N0bmp5MXBqUHdrMzZGcmRRTXhSdnNDZ1lFQThIdmkKcC9LeUZYMHNFNUE2bCtPaWhvSGQzc1ByR1lpalBuNHVkbGlhN2RKNWYzc0JGQ0NMN3B2WkdkQzZUM2NMT0k0bwphcStFZUNKeEkybEF1QUtYdnd0MFNDblZRU3oxcEFMajNzVm9Wby9aUWFia3JCZHk5VWtZR0lmYjdSczM5emlpCmRtOUJxSUpMc2RSbkNmNVJwamJzeDNzaGdrV0d5aGRYWFhSSm9ZY0NnWUVBdkN3OWF2aTJ3ZkdoczF6SEJiS3QKTVRkQ0xVRVgxNXZUSTROZU9pR25OZ2Zwa3ZRajE2T0MrNC9HSEN3TkdwYnFuYkgyQXdDanNVWWswZVB4OTFmaQpkMEhWazBRV2c0Zks3ZzgxdXVMRHZBbXJYY1A2YXdyeTQ0azliOFZiSWdFQlpnVldvMG9KaEFIdndJRThiVUh3CmNHK3NEYkdRNnpydW8wWE1nSUpWNGswQ2dZRUFwK3lxQ2NLajdmTjVDclFrNWhrVFRUOXo4WEQzUXQ0eHQ1cWUKMFE3d0tHOVhYZGhEbVkxY2lTS1VoNzFEeStlQmsxMVpCWjVJTHlkRnY0ZG9wTlZTcHhuVmVlcVVPaTJ0M1hnVApMR1RHaGVOdXZyUk9hNGo0UWlWblNRSGRaWVVqSUdPUXRvamIzVklXanplVk45bzVvNG9vN3VhaE1IbGlOTTMxCnVKRlNOUk1DZ1lCWE5ocFNQVEpTemk4M0hoSXMybGpOczQ2S1I0L3h0YytiUmJyNWZyOEEwS2NibE1wMGxXWEQKNHkyc09yWTFJK3RzcDE0WkNTbndIdUhjdzFKZnRMUVJweEdsVURxc1g1a0FUV2JPWFlQN0JhOU5DazVRRmRkNQozUm9JV1E3L0FIdnZ4YlRhS1o0M2NnMXM5M3Z1d0EvbzZhWnZ4UDRZM0Rtbjg0VmNLQ3pMOEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
# Source: kong/templates/admission-webhook.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kong-kong-validation-webhook-keypair
  namespace:  kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURhekNDQWxPZ0F3SUJBZ0lSQVB1RC9oNkwwZHNQQWRlOWE2eU1tbWt3RFFZSktvWklodmNOQVFFTEJRQXcKSERFYU1CZ0dBMVVFQXhNUmEyOXVaeTFoWkcxcGMzTnBiMjR0WTJFd0hoY05NalF3TVRFNU1ETTFNakkyV2hjTgpNelF3TVRFMk1ETTFNakkyV2pBd01TNHdMQVlEVlFRREV5VnJiMjVuTFd0dmJtY3RkbUZzYVdSaGRHbHZiaTEzClpXSm9iMjlyTG10dmJtY3VjM1pqTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEKck9ra3Rvb1JsdU5BN2Q5Y3AwMmtSSC9MenRYZ0ZMcURhbWNsTHpiTUFpNklRcmpHVDUvbUxuZWV1c084SlI5cApXVGp4MWg3Z2tPNHRta3pxRFJMTnp2K1dEY25GY1NPbXRLMFpyT2pvajBXYWM0TUpCZnhQcFBlaS9WV0dKak4yCmJoSzJMTkxkZnBCRkZYV3ZmTU14aEVkQ2hpQlJXcEFxZGY5NUlQU2N2MzdyOFlhcGZNM1M4cGVtRXIvSU1xOEkKdzVBZmgwWFlJTjc4cEEwTEgxM1dPQnlJUjFrR3hpaEZLL3FjcnhBMHNlMWZQUWgxdjhQMkxhYWc2bHpnMkVzdQp1WlhmYU1oVG4rY24vbE11YUkxU3RxZ1pjL3ROK3ZLOU8wMktMSkVXcy82dEpJcHhJNkRxb0pFeTh2N0FlOW1qCkYvNEI1QU5EVGY4NE04ZGU3WEFjRlFJREFRQUJvNEdUTUlHUU1BNEdBMVVkRHdFQi93UUVBd0lGb0RBZEJnTlYKSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWZCZ05WSFNNRQpHREFXZ0JSdHJ2NTQrOGxMY3Y4a1BhUTZLNVRIRTZlUUxUQXdCZ05WSFJFRUtUQW5naVZyYjI1bkxXdHZibWN0CmRtRnNhV1JoZEdsdmJpMTNaV0pvYjI5ckxtdHZibWN1YzNaak1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUsKVmNwdmpNaWU1ZDBTeE1Nb1hnbGxrRElpU2tmbHlnQmRTbjhvZlg2bG5RRmRDZytDYjRZd3dVQWRaeFE1MUJ5NwpqVjZXRlNPVW5WejYyNGQ2b1JiMVgvNERlaThBajQ5bUI0L3ZBZUNEc21tTU5xQkF4bUQ1UTA2cVByYXBsdk1qClY0MlNhM1k2WUxId2NGTTRwVUIzQWt2T3hGR0RlaGJMcmV1aUZlQWZKSjJrYzI0VTljby9XMnpFTXpmNkFYSnkKZ0VmWWJaUGM5SUtvanNUSHFURVZ3dmQ4UEJUOGNHNzIxbGlSWE55OEhxZU5NZXQ2VTZQWThJYkwvOHZNeXVKTwpzVEluWGNrRERJeklhaXZGa1RMdXg0Smp2WVc5UG1uNGNab2dxcmEzK0htbUprN3JMTitqaUNBUFRFUUJrdUE4CkRNeHgza3AvbjZ5NTBKa0xhcENqCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBck9ra3Rvb1JsdU5BN2Q5Y3AwMmtSSC9MenRYZ0ZMcURhbWNsTHpiTUFpNklRcmpHClQ1L21MbmVldXNPOEpSOXBXVGp4MWg3Z2tPNHRta3pxRFJMTnp2K1dEY25GY1NPbXRLMFpyT2pvajBXYWM0TUoKQmZ4UHBQZWkvVldHSmpOMmJoSzJMTkxkZnBCRkZYV3ZmTU14aEVkQ2hpQlJXcEFxZGY5NUlQU2N2MzdyOFlhcApmTTNTOHBlbUVyL0lNcThJdzVBZmgwWFlJTjc4cEEwTEgxM1dPQnlJUjFrR3hpaEZLL3FjcnhBMHNlMWZQUWgxCnY4UDJMYWFnNmx6ZzJFc3V1WlhmYU1oVG4rY24vbE11YUkxU3RxZ1pjL3ROK3ZLOU8wMktMSkVXcy82dEpJcHgKSTZEcW9KRXk4djdBZTltakYvNEI1QU5EVGY4NE04ZGU3WEFjRlFJREFRQUJBb0lCQUJtK3hRYWczQ21aUUt1cQpYRU5VM2lhTTJLMjlUcFlIaDFXcWNmRHJ1Y2lCWVN4K0VwajhkK3RuU0MwS3c5TExNSVptWWl1OWdUWlRJRnNmCkpLSTVzSWNucXhIMmZ2MXZqM3pMWVUxTGlJVDhtaGlrNUEwT0dsVzN3WDd5NHZ5QklSc1draVZ1YUNoV0Z0TXgKS0tYczhreDl4N3ZzeC9BeUV3QnF2NEJXcTVnZmNSZlBObzl4dmgyMks5NUpyTEpleW9ZN1IzaUNUangyMGhyMAowdm5LWEFwTGVaVm9RUGpxcDJ2RUNMdmVEdUN1aGljSElIL3k0ZTNZSll1TFduNDZudFJtYVYxb2ZyVUdHSFVRCjZDaE1uS2dySU9lYXB6MVNCSXhzYTREa3VmcXc0N3o0S2lNVSt3dE1HYkgyVWo5STVLNDA4L2Nacmt4d1dhMUwKWmxjM2g2a0NnWUVBMkwxUW9Cc3BCc0djWlZ0NzczR3NFd1l4MkozQnVaajZYQjBTdmUycFVuT0x3MjYwbnJjRgp0K2hHSFhLREkrRFl1ODI3MktvcWw2WE5kWmVVaHRjTmNJTEN3bGk1SHgzdTM4cng4MGMrS2VBTmt2TmtEUUE3CmpXc0FUVldQNWF0MmcwTlNzR2ZnNHpnaVMzQkFFNzJzbFZPTGxRdUtaa1lWeTlYcnpMeVZQWHNDZ1lFQXpEdGsKTTBtYXlaVUlwTU9Bb2lHNkRsSTY2NUZmYkdBeXZWYkpQbnlMWWdNNkpRQU5JRTFmdVFOa05JV3NVWENvS1ZsdwpNMVJlZ0ZyR1dUNFdBWC9TVDBRc0FPUWxXQ1cwK2hoZjR4dE90dlZiUld1cXdCMUNtalgyZEY3eW9tK1BTY2o2CnM1VlFHNHlPQzV1NFVnaHhlYXMzK0NiRUxGeXZtVHpUY3FQc3I2OENnWUVBbjkza0huUnFLb1djcWxaMGNBVlUKZXlQU21JaWtZQldxZFU4c2g5TkpWWHZNMTNaTTI5VDc3czd4Q0w5eVk0QngzUFMvWGUwR1JaMFNrMjRmSytacwpEMVVqK3Q0ZWpna3lMUGd3eHRVQjBUbG1TY0lsUmtHcHE0SUZVd1dOZ2thYXYrOWtpcUhVaTBUWVp2U0JEdzZVCndnQkJzTW8yWjRIQ2lmdGNWa096Z1FrQ2dZQWMzMVNXRDVUTFpMOVpFNjV1dlZmaFNHeTkrc3BEdHdIVlZKeVUKc2VTK2tYZzUzTnorTVJJVVJNOTR3V0VRRGw0bm9sWkRXMjBVdGtDT1EwRzNLb3ZmMnVKaHFkOUJxK3IrNUUxQgovUTFPdmpjT0JGK2FVMGlrSm5iV0VzbzRmbzhDUG1CNjNPUDdVUTZQdzQ3MlFlMVE1d3k5anpWeWxCUGJGUWRMCmtMTVlUd0tCZ0RUTXRzSGxweTU5UXBKV1lvc3c1bzNydXdNazdKMWQremZmbXJJQXNhdXJya0FYclMzSWx3elAKZnlFMGM3NngxWjFPQlBFa0trVGsyLzNtY2grRVZIMGpDV0graUNzTHNXOUM3ZG5YRFBleElYajNxMDdwU3hkMgo4bjJBNVZEdllwM3lnRlc1dkZBN0JwdEJEcnlrdmtnS0ZlempYM3I5Ui8xMmdlOUt4M3BzCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
  name: kong-kong
rules:

- apiGroups:
  - configuration.konghq.com
  resources:
  - kongupstreampolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongupstreampolicies/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongconsumergroups
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongconsumergroups/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - ingressclassparameterses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongconsumers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongconsumers/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongingresses/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongplugins
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongplugins/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - tcpingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - tcpingresses/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - configuration.konghq.com
  resources:
  - udpingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - udpingresses/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongclusterplugins
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - configuration.konghq.com
  resources:
  - kongclusterplugins/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kong-kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kong-kong
subjects:
  - kind: ServiceAccount
    name: kong-kong
    namespace: kong
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kong-kong
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<kong-ingress-controller-leader-nginx>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "kong-ingress-controller-leader-kong-kong"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  # Begin KIC 2.x leader permissions
  - apiGroups:
      - ""
      - coordination.k8s.io
    resources:
      - configmaps
      - leases
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
---
# Source: kong/templates/controller-rbac-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kong-kong
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kong-kong
subjects:
  - kind: ServiceAccount
    name: kong-kong
    namespace: kong
---
# Source: kong/templates/admission-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-kong-validation-webhook
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: webhook
  selector:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
    app.kubernetes.io/component: app
---
# Source: kong/templates/service-kong-admin.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-kong-admin
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
spec:
  type: NodePort
  ports:
  - name: kong-admin
    port: 8001
    targetPort: 8001
    protocol: TCP
  selector:
    app.kubernetes.io/name: kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: "kong"
---
# Source: kong/templates/service-kong-manager.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-kong-manager
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
spec:
  type: NodePort
  ports:
  - name: kong-manager
    port: 8002
    targetPort: 8002
    protocol: TCP
  - name: kong-manager-tls
    port: 8445
    targetPort: 8445
    protocol: TCP
  selector:
    app.kubernetes.io/name: kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: "kong"
---
# Source: kong/templates/service-kong-proxy.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-kong-proxy
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
    enable-metrics: "true"
spec:
  type: LoadBalancer
  ports:
  - name: kong-proxy
    port: 80
    targetPort: 8000
    protocol: TCP
  - name: kong-proxy-tls
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app.kubernetes.io/name: kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: "kong"
---
# Source: kong/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-kong
  namespace:  kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
    app.kubernetes.io/component: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kong
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: "kong"

  template:
    metadata:
      annotations:
        kuma.io/service-account-token-volume: kong-kong-token
        kuma.io/gateway: "enabled"
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app.kubernetes.io/name: kong
        helm.sh/chart: kong-2.33.3
        app.kubernetes.io/instance: "kong"
        app.kubernetes.io/managed-by: "Helm"
        app.kubernetes.io/version: "3.5"
        app.kubernetes.io/component: app
        app: kong-kong
        version: "3.5"
    spec:
      serviceAccountName: kong-kong
      automountServiceAccountToken: false
      
      initContainers:
      - name: clear-stale-pid
        image: kong:3.5
        imagePullPolicy: IfNotPresent
        securityContext:
        
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        resources:
          {}
        command:
        - "rm"
        - "-vrf"
        - "$KONG_PREFIX/pids"
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "127.0.0.1:8001"
        - name: KONG_CLUSTER_LISTEN
          value: "off"
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_KIC
          value: "on"
        - name: KONG_LOG_LEVEL
          value: "debug"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "2"
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PORT_MAPS
          value: "80:8000, 443:8443"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 http2 ssl"
        - name: KONG_PROXY_STREAM_ACCESS_LOG
          value: "/dev/stdout basic"
        - name: KONG_PROXY_STREAM_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ROUTER_FLAVOR
          value: "traditional"
        - name: KONG_STATUS_ACCESS_LOG
          value: "off"
        - name: KONG_STATUS_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_STATUS_LISTEN
          value: "0.0.0.0:8100"
        - name: KONG_STREAM_LISTEN
          value: "off"
        volumeMounts:
        - name: kong-kong-prefix-dir
          mountPath: /kong_prefix/
        - name: kong-kong-tmp
          mountPath: /tmp
      containers:
      - name: ingress-controller
        securityContext:
      
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        args:
        
        ports:
        - name: webhook
          containerPort: 8080
          protocol: TCP
        - name: cmetrics
          containerPort: 10255
          protocol: TCP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace  
        
        
        
        
        
        
        - name: CONTROLLER_ADMISSION_WEBHOOK_LISTEN
          value: "0.0.0.0:8080"
        - name: CONTROLLER_ELECTION_ID
          value: "kong-ingress-controller-leader-kong"
        - name: CONTROLLER_INGRESS_CLASS
          value: "kong"
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_KONG_ADMIN_URL
          value: "http://localhost:8001"
        - name: CONTROLLER_PUBLISH_SERVICE
          value: "kong/kong-kong-proxy"
        image: kong/kubernetes-ingress-controller:3.0
        imagePullPolicy: IfNotPresent
      
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          {}
        volumeMounts:
        - name: webhook-cert
          mountPath: /admission-webhook
          readOnly: true
        - name: kong-kong-token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
        
        
      
      - name: "proxy"
        image: kong:3.5
        imagePullPolicy: IfNotPresent
        securityContext:
        
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        env:
         
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ADMIN_LISTEN
          value: "127.0.0.1:8001"
        - name: KONG_CLUSTER_LISTEN
          value: "off"
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_KIC
          value: "on"
        - name: KONG_LOG_LEVEL
          value: "debug"
        - name: KONG_LUA_PACKAGE_PATH
          value: "/opt/?.lua;/opt/?/init.lua;;"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "2"
        - name: KONG_PLUGINS
          value: "bundled"
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PORTAL_API_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PORT_MAPS
          value: "80:8000, 443:8443"
        - name: KONG_PREFIX
          value: "/kong_prefix/"
        - name: KONG_PROXY_ACCESS_LOG
          value: "/dev/stdout"
        - name: KONG_PROXY_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_PROXY_LISTEN
          value: "0.0.0.0:8000, 0.0.0.0:8443 http2 ssl"
        - name: KONG_PROXY_STREAM_ACCESS_LOG
          value: "/dev/stdout basic"
        - name: KONG_PROXY_STREAM_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_ROUTER_FLAVOR
          value: "traditional"
        - name: KONG_STATUS_ACCESS_LOG
          value: "off"
        - name: KONG_STATUS_ERROR_LOG
          value: "/dev/stderr"
        - name: KONG_STATUS_LISTEN
          value: "0.0.0.0:8100"
        - name: KONG_STREAM_LISTEN
          value: "off"
        - name: KONG_NGINX_DAEMON
          value: "off"
        lifecycle:
          preStop:
            exec:
              command:
              - kong
              - quit
              - --wait=15
        ports:
        - name: admin
          containerPort: 8001
          protocol: TCP
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-tls
          containerPort: 8443
          protocol: TCP
        - name: status
          containerPort: 8100
          protocol: TCP
        volumeMounts:
          - name: kong-kong-prefix-dir
            mountPath: /kong_prefix/
          - name: kong-kong-tmp
            mountPath: /tmp
          
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status/ready
            port: status
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: status
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          {} 
      securityContext:
        {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: kong-kong-prefix-dir
          emptyDir:
            sizeLimit: 256Mi
        - name: kong-kong-tmp
          emptyDir:
            sizeLimit: 1Gi
        - name: kong-kong-token
          projected:
            sources:
            - serviceAccountToken:
                expirationSeconds: 3607
                path: token
            - configMap:
                items:
                - key: ca.crt
                  path: ca.crt
                name: kube-root-ca.crt
            - downwardAPI:
                items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
        - name: webhook-cert
          secret:
            secretName: kong-kong-validation-webhook-keypair
---
# Source: kong/templates/ingress-class.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
spec:
  controller: ingress-controllers.konghq.com/kong
---
# Source: kong/templates/admission-webhook.yaml
kind: ValidatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1
metadata:
  name: kong-kong-validations
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
webhooks:
- name: validations.kong.konghq.com
  objectSelector:
    matchExpressions:
    - key: owner
      operator: NotIn
      values:
      - helm
  failurePolicy: Ignore
  sideEffects: None
  admissionReviewVersions: ["v1beta1"]
  rules:
  - apiGroups:
    - configuration.konghq.com
    apiVersions:
    - '*'
    operations:
    - CREATE
    - UPDATE
    resources:
    - kongconsumers
    - kongplugins
    - kongclusterplugins
    - kongingresses
  - apiGroups:
    - ''
    apiVersions:
    - 'v1'
    operations:
    - CREATE
    - UPDATE
    resources:
    - secrets
    - services
  - apiGroups:
    - networking.k8s.io
    apiVersions:
      - 'v1'
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  - apiGroups:
    - gateway.networking.k8s.io
    apiVersions:
    - 'v1alpha2'
    - 'v1beta1'
    - 'v1'
    operations:
    - CREATE
    - UPDATE
    resources:
    - gateways
    - httproutes
  clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJekNDQWd1Z0F3SUJBZ0lRUVBuV2x5eDcwSVYzMk4xbW9aSm4wREFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFERXhGcmIyNW5MV0ZrYldsemMybHZiaTFqWVRBZUZ3MHlOREF4TVRrd016VXlNalZhRncwegpOREF4TVRZd016VXlNalZhTUJ3eEdqQVlCZ05WQkFNVEVXdHZibWN0WVdSdGFYTnphVzl1TFdOaE1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTF0d3V0RzhUUS9UaHFqRFRtU2VEeUxNbGtFRlkKdEJXMGQ1TzdTNkk4bElWOWR4Z245L1g3eVdCNWQvS04rdUdsMU9xZVM0cURXWWE3RWFQZHIrNy81TDZMZFJlQwowMzRYL090aTBMdkE5cE10Z1FLbFJIaURSUFZUTTBDVjFUZHhpYXhQSE1PMEJsR3B1M0xVU0QzV2ZEcTg0UHhVCkpRL3ZPM3QvSG54WlpNOVpWcHpaWHVKelZ4UllpdUs3ZjFHSGptUmoxQ0VRcGtYSllqOUkwZlFuMzRhSVlIbU0KUWJDS200aE11cFlYNTA0WS8vTEN5ZVRJYjIxdmNXRS9GT2xJeUNDbE9FN21FYm8vMnN0bnF0d21XVjcwc2dIegowdjhWYlBxOGt3MmJCNEE0NzViK0h6alk3UjQxR0xvZXlPZnlQYVNkV2w5WTJUaHpGNVVjcGpOSlhRSURBUUFCCm8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FxUXdIUVlEVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUYKQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMnUvbmo3eVV0eS95UTlwRG9ybE1jVApwNUF0TUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCb2l2MWkweFJmZWh1dCszR3JaYjlkYm1sVngzdk5WK1V1CjlJT0dRLzliRVVMWVpKaENFRGJ3c25sN2lid0ZRTmxaN2tVQUVGOEg1c1I2QmhjRlNaQ3NkdVo1cUtEYnFYVnYKWjRsdWxjalJaV0pvYkZ5elkxMVR3RlJqZ0JvQXNhWjJ6MjVoVXpPTzBMemw3bnhxNVBzb055M2ZWNHBSZGVLQgpLdWNBRGtrdWFWUktyT2tNNGt0YUw2WEhScGk2Z0JxcEJ2a2xJalM2UUlSck04QVdBSkdXbEQ3VWNSRU1PNFZECkRCWG12UlVSUThSallYTFpFZlBoMDEyZmZ0UGNXK3NZTVA5ZitIY3J2RUxNSk4zVGNMNzNQUHhUMWpwYnlhSWUKVVM3MzU0aUc0YUlteHlRcDI3eFVBdGQxdTlzNzk3YUZXZTB2cGEwWUZrTWp3SUJFUWdXRgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    service:
      name: kong-kong-validation-webhook
      namespace: kong

NOTES:
To connect to Kong, please execute the following commands:

HOST=$(kubectl get svc --namespace kong kong-kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
PORT=$(kubectl get svc --namespace kong kong-kong-proxy -o jsonpath='{.spec.ports[0].port}')
export PROXY_IP=${HOST}:${PORT}
curl $PROXY_IP

Once installed, please follow along the getting started guide to start using
Kong: https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/getting-started/

@rainest
Contributor

rainest commented Jan 22, 2024

After adding args (args=["kong", "docker-start", "--vv"]) and running helm -n kong install kong kong/kong -f ./charts/config.yaml --debug, I got this in the proxy container:

Error: 
/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: 
stack traceback:
	[C]: in function 'assert'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: in function 'load_conf'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:85: in function </usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:84>
	[C]: in function 'xpcall'
	/usr/local/bin/kong:99: in function 'file_gen'
	init_worker_by_lua(nginx.conf:189):51: in function <init_worker_by_lua(nginx.conf:189):49>
	[C]: in function 'xpcall'
	init_worker_by_lua(nginx.conf:189):58: in function <init_worker_by_lua(nginx.conf:189):56>

....

Also, I created a new namespace to deploy kong, so I think there is no interference this time.

And there's no other error detail above that? It looks like there should be more detailed information about why https://github.com/Kong/kong/blob/3.5.0/kong/cmd/utils/inject_confs.lua#L28 failed.

That code is broadly concerned with initializing some files Kong creates on start, but I can't think of obvious reasons that would fail or be tied to an upgrade, as it's created fresh for each container replica. The chart changes related to that were to create a dedicated EmptyDir volume when we started using read-only root filesystems, but your Deployment isn't missing that and isn't trying to use some other prefix location, so I wouldn't expect any issues there.

You could file an upstream issue in https://github.com/Kong/kong to try and see if they can determine why the prefix initializer is reporting an error with no details, but if you are indeed able to clear the issue by starting from a fresh install, that's probably quicker.
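One quick way to rule the security context in or out (a debugging sketch only, not an official chart recommendation; containerSecurityContext is the chart key that also comes up later in this thread) is to temporarily relax the read-only root filesystem in values.yaml and see whether the prefix initializer error goes away:

containerSecurityContext:
  # Debugging only: let Kong write freely while investigating the
  # inject_confs.lua failure; revert this once the root cause is found.
  readOnlyRootFilesystem: false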

@NICK-DUAN
Author

After adding args (args=["kong", "docker-start", "--vv"]) and running helm -n kong install kong kong/kong -f ./charts/config.yaml --debug, I got this in the proxy container:

Error: 
/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: 
stack traceback:
	[C]: in function 'assert'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: in function 'load_conf'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:85: in function </usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:84>
	[C]: in function 'xpcall'
	/usr/local/bin/kong:99: in function 'file_gen'
	init_worker_by_lua(nginx.conf:189):51: in function <init_worker_by_lua(nginx.conf:189):49>
	[C]: in function 'xpcall'
	init_worker_by_lua(nginx.conf:189):58: in function <init_worker_by_lua(nginx.conf:189):56>

....
Also, I created a new namespace to deploy kong, so I think there is no interference this time.

And there's no other error detail above that? It looks like there should be more detailed information about why https://github.com/Kong/kong/blob/3.5.0/kong/cmd/utils/inject_confs.lua#L28 failed.

That code is broadly concerned with initializing some files Kong creates on start, but I can't think of obvious reasons that would fail or be tied to an upgrade, as it's created fresh for each container replica. The chart changes related to that were to create a dedicated EmptyDir volume when we started using read-only root filesystems, but your Deployment isn't missing that and isn't trying to use some other prefix location, so I wouldn't expect any issues there.

You could file an upstream issue in https://github.com/Kong/kong to try and see if they can determine why the prefix initializer is reporting an error with no details, but if you are indeed able to clear the issue by starting from a fresh install, that's probably quicker.

Thanks, I already created a new issue: Kong/kong#12418

@NICK-DUAN
Author

Also, does anybody know why the ingress-controller container gets this?

time="2024-01-12T10:51:41Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused" logger=setup retries=0/60

To rule out interference from anything else, I uninstalled the other kong charts and installed with only this custom values file:

env:
  database: "off"
  admin_listen: "127.0.0.1:8001"
  log_level: debug
admin:
  enabled: "true"
  http:
    enabled: "true"
  tls:
    enabled: "false"

but it looks like nothing changed.

@NICK-DUAN
Author

I found issue #526 and set the envs as @brennoo suggested (see the attached screenshot), but it did not help.

@brennoo

brennoo commented Jan 25, 2024

hey @NICK-DUAN,

It seems you need to set the admin_listen port to 8444; check the admin port here: #983 (comment)

@NICK-DUAN
Author

NICK-DUAN commented Jan 25, 2024

hey @NICK-DUAN,

It seems you need to set the admin_listen port to 8444; check the admin port here: #983 (comment)

I installed kong/kong and got a service whose port is 8001, which seems right:

# Source: kong/templates/service-kong-admin.yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-kong-admin
  namespace: kong
  labels:
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-2.33.3
    app.kubernetes.io/instance: "kong"
    app.kubernetes.io/managed-by: "Helm"
    app.kubernetes.io/version: "3.5"
spec:
  type: NodePort
  ports:
  - name: kong-admin
    port: 8001
    targetPort: 8001
    protocol: TCP
  selector:
    app.kubernetes.io/name: kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: "kong"

I have also tried #983 (comment), with this custom values file:

env:
  database: "off"
  admin_listen: "127.0.0.1:8444 ssl"
  log_level: debug
ingressController:
  env:
    kong_admin_init_retries: 5
    kong_admin_init_retry_delay: "20s"
admin:
  enabled: true
  http:
    ebabled: false
  tls:
    ebabled: true

I changed the admin port, but it still retries and gets connection refused..., so it doesn't help.

time="2024-01-25T11:51:08Z" level=info msg="diagnostics server disabled"
time="2024-01-25T11:51:08Z" level=info msg="starting controller manager" commit=956f457aafc2a07910bfa3c496e549493940e1e1 logger=setup release=2.12.3 repo="https://github.com/Kong/kubernetes-ingress-controller.git"
time="2024-01-25T11:51:08Z" level=info msg="the ingress class name has been set" logger=setup value=kong
time="2024-01-25T11:51:08Z" level=info msg="getting enabled options and features" logger=setup
time="2024-01-25T11:51:08Z" level=info msg="getting the kubernetes client configuration" logger=setup
W0125 11:51:08.496959       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2024-01-25T11:51:08Z" level=info msg="starting standalone health check server" logger=setup
time="2024-01-25T11:51:08Z" level=info msg="getting the kong admin api client configuration" logger=setup
time="2024-01-25T11:51:08Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"https://localhost:8444/\": dial tcp [::1]:8444: connect: connection refused" logger=setup retries=0/5
time="2024-01-25T11:51:28Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"https://localhost:8444/\": dial tcp [::1]:8444: connect: connection refused" logger=setup retries=1/5
time="2024-01-25T11:51:48Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"https://localhost:8444/\": dial tcp [::1]:8444: connect: connection refused" logger=setup retries=2/5
time="2024-01-25T11:52:08Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"https://localhost:8444/\": dial tcp [::1]:8444: connect: connection refused" logger=setup retries=3/5
time="2024-01-25T11:52:28Z" level=info msg="Retrying kong admin api client call after error" error="making HTTP request: Get \"https://localhost:8444/\": dial tcp [::1]:8444: connect: connection refused" logger=setup retries=4/5
Error: could not retrieve Kong admin root(s): making HTTP request: Get "https://localhost:8444/": dial tcp [::1]:8444: connect: connection refused
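When the controller keeps getting connection refused like this, it can help to confirm from inside the proxy container which admin_listen the chart actually rendered and whether Kong itself considers the node healthy. A sketch only; deploy/kong-kong and the proxy container name assume the default release naming, and /kong_prefix is the prefix the chart mounts:

# Which admin_listen did the container actually receive?
kubectl exec -n kong deploy/kong-kong -c proxy -- printenv KONG_ADMIN_LISTEN

# Is the Kong node itself running? (kong health checks the node's services)
kubectl exec -n kong deploy/kong-kong -c proxy -- kong health -p /kong_prefix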

@NICK-DUAN
Author

new custom-values.yaml

proxy:
  type: ClusterIP
env:
  admin_listen: "127.0.0.1:8444 ssl"
ingressController:
  env:
    kong_admin_init_retries: 5
    kong_admin_init_retry_delay: "20s"
  ingressClass: "testkong"
admin:
  enabled: true
  tls:
    enabled: true
  type: ClusterIP
containerSecurityContext: {}
replicaCount: 1
manager:
  enabled: false
portal:
  enabled: false
portalapi:
  enabled: false
cluster:
  enabled: false
clustertelemetry:
  enabled: false
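When iterating on values like this, it can save a deploy cycle to render the chart locally and check the KONG_ADMIN_LISTEN that actually comes out. A sketch only; adjust the values file path to your own:

helm template kong kong/kong -n kong -f ./custom-values.yaml | grep -A 1 KONG_ADMIN_LISTEN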

@NICK-DUAN
Author

Then I deleted the securityContext. It looks a little better, but another problem is coming up:

time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.IngressClassParameters worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.Dynamic/HTTPRoute worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.Dynamic/KnativeV1Alpha1/Ingress worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.KongConsumerGroup worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.KongConsumer worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.Dynamic/Gateway worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.KongClusterPlugin worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.IngressClass.netv1 worker count=1
time="2024-02-05T02:53:07Z" level=info msg="Starting workers" logger=controllers.Ingress.netv1 worker count=1
time="2024-02-05T02:53:14Z" level=info msg="successfully synced configuration to Kong" update_strategy=InMemory url="https://localhost:8444"
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
Detected at:
	>  goroutine 144 [running]:
	>  runtime/debug.Stack()
	>  	/usr/local/go/src/runtime/debug/stack.go:24 +0x5e
	>  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	>  	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/log/log.go:60 +0xcd
	>  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).Error(0xc000135a00, {0x2194ec0, 0xc0028f51e0}, {0x1ed57a4, 0x21}, {0x0, 0x0, 0x0})
	>  	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/log/deleg.go:139 +0x5d
	>  github.com/go-logr/logr.Logger.Error({{0x21b5f28?, 0xc000135a00?}, 0x0?}, {0x2194ec0, 0xc0028f51e0}, {0x1ed57a4, 0x21}, {0x0, 0x0, 0x0})
	>  	/go/pkg/mod/github.com/go-logr/logr@v1.2.4/logr.go:299 +0xda
	>  sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1.1({0x21b2590?, 0xc0008e7a40?})
	>  	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/source/kind.go:68 +0x1a9
	>  k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2(0xc00281de00?, {0x21b2590?, 0xc0008e7a40?})
	>  	/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/loop.go:73 +0x52
	>  k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x21b2590, 0xc0008e7a40}, {0x21a8ef0?, 0xc0008e9040}, 0x1, 0x0, 0x0?)
	>  	/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/loop.go:74 +0x233
	>  k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel({0x21b2590, 0xc0008e7a40}, 0x0?, 0x0?, 0x0?)
	>  	/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/wait/poll.go:33 +0x56
	>  sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start.func1()
	>  	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/source/kind.go:56 +0xee
	>  created by sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind).Start in goroutine 139
	>  	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.2/pkg/internal/source/kind.go:48 +0x1d8
time="2024-02-05T02:55:06Z" level=error msg="Could not wait for Cache to sync" error="failed to wait for DiscoveryV1EndpointSlice caches to sync: timed out waiting for cache to be synced for Kind *v1.EndpointSlice" logger=controllers.EndpointSlice
time="2024-02-05T02:55:06Z" level=info msg="Stopping and waiting for non leader election runnables"
time="2024-02-05T02:55:06Z" level=info msg="Stopping and waiting for leader election runnables"
time="2024-02-05T02:55:06Z" level=info msg="context done: shutting down the proxy update server" subsystem=dataplane-synchronizer
time="2024-02-05T02:55:06Z" level=info msg="Shutdown signal received, waiting for all workers to finish" logger=controllers.Ingress.netv1
time="2024-02-05T02:55:06Z" level=info msg="Shutdown signal received, waiting for all workers to finish" logger=controllers.IngressClass.netv1
time="2024-02-05T02:55:06Z" level=info msg="Shutdown signal received, waiting for all workers to finish" logger=controllers.KongClusterPlugin
time="2024-02-05T02:55:06Z" level=info msg="Shutdown signal received, waiting for all workers to finish" logger=controllers.Dynamic/Gateway
time="2024-02-05T02:55:06Z" level=info msg="Shutdown signal received, waiting for all workers to finish" logger=controllers.KongConsumer

Now the proxy container is running well, but the ingress-controller container restarts all the time. Can anybody help?

@rainest
Contributor

rainest commented Mar 4, 2024

You likely need to update CRDs: https://github.com/Kong/charts/blob/main/charts/kong/UPGRADE.md#updates-to-crds

The controller framework eventually aborts if you ask it to manage resources it cannot access, so missing CRDs or missing permissions will cause that sort of failure. Helm will keep the permissions updated so long as you use a chart version released after your KIC version, but it does not update CRDs.
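For reference, the CRD update in UPGRADE.md amounts to applying the chart's CRD manifests out of band, since Helm does not upgrade CRDs on its own. A sketch; the exact file path can differ per chart version, so follow the UPGRADE.md link above for your release:

# Apply the chart's CRDs directly, because `helm upgrade` does not update them
kubectl apply -f https://raw.githubusercontent.com/Kong/charts/main/charts/kong/crds/custom-resource-definitions.yaml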

@tu-doan

tu-doan commented Mar 20, 2024

@NICK-DUAN
To configure the admin listening address, you should update this config instead of env:

admin:
  enabled: true
  addresses:
    - 127.0.0.1

Here is my helm diff after adding the above config (see the attached screenshot); this solved my issue in both containers.
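Put together, the relevant values end up looking roughly like this (a sketch only; the http/tls flags are assumptions added to match the controller's http://localhost URL, while admin.addresses is the key from the comment above):

admin:
  enabled: true
  # Bind the admin listener to loopback so only the ingress-controller
  # sidecar in the same pod can reach it.
  addresses:
    - 127.0.0.1
  http:
    enabled: true
  tls:
    enabled: false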

@NICK-DUAN
Author

@NICK-DUAN To configure the admin listening address, you should update this config instead of env:

admin:
  enabled: true
  addresses:
    - 127.0.0.1

Here is my helm diff after adding the above config (see the attached screenshot); this solved my issue in both containers.

Thanks, I'll try

@NICK-DUAN
Author

latest progress:

  1. I got these ingress-controller logs:
2024-05-17T09:27:41Z	info	Diagnostics server disabled	{"v": 0}
2024-05-17T09:27:41Z	info	setup	Starting controller manager	{"v": 0, "release": "3.0.3", "repo": "https://github.com/Kong/kubernetes-ingress-controller.git", "commit": "cbd7866dbd449fd32b1c9f56cf747e68b577973d"}
2024-05-17T09:27:41Z	info	setup	The ingress class name has been set	{"v": 0, "value": "kong-test"}
2024-05-17T09:27:41Z	info	setup	Getting enabled options and features	{"v": 0}
2024-05-17T09:27:41Z	info	setup	Getting the kubernetes client configuration	{"v": 0}
W0517 09:27:41.877939       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2024-05-17T09:27:41Z	info	setup	Starting standalone health check server	{"v": 0}
2024-05-17T09:27:41Z	info	setup	Getting the kong admin api client configuration	{"v": 0}
2024-05-17T09:27:41Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "0/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:42Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "1/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:43Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "2/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:44Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "3/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:45Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "4/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:46Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "5/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:47Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "6/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:48Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "7/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:49Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "8/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:50Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "9/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
2024-05-17T09:27:51Z	info	setup	Retrying kong admin api client call after error	{"v": 0, "retries": "10/60", "error": "making HTTP request: Get \"http://localhost:8001/\": dial tcp [::1]:8001: connect: connection refused"}
  2. Then I thought that the proxy container exposed port 8444, but the proxy reported these logs:
Error: 

  Run with --v (verbose) or --vv (debug) for more details
  3. Then I changed the proxy command and args:
command:
- ./docker-entrypoint.sh
args:
- kong
- docker-start
- --vv

And I got logs:

Error: 
/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: 
stack traceback:
	[C]: in function 'assert'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:28: in function 'load_conf'
	/usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:85: in function </usr/local/share/lua/5.1/kong/cmd/utils/inject_confs.lua:84>
	[C]: in function 'xpcall'
	/usr/local/bin/kong:99: in function 'file_gen'
	init_worker_by_lua(nginx.conf:193):51: in function <init_worker_by_lua(nginx.conf:193):49>
	[C]: in function 'xpcall'
	init_worker_by_lua(nginx.conf:193):58: in function <init_worker_by_lua(nginx.conf:193):56>

So I think the problem is that the proxy container is not running properly, but I don't know how to fix it.

@NICK-DUAN
Author

NICK-DUAN commented May 17, 2024

After deleting the container securityContext, the proxy got these logs:

2024/05/17 10:24:56 [notice] 1281#0: *308 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100"
2024/05/17 10:24:56 [debug] 1281#0: *308 [lua] init.lua:23: poll(): worker-events: emulate poll method
2024/05/17 10:24:58 [debug] 1281#0: *309 [lua] init.lua:23: poll(): worker-events: emulate poll method
2024/05/17 10:25:03 [debug] 1281#0: *303 [lua] init.lua:23: poll(): worker-events: emulate poll method
complete logs: ``` 2024/05/17 10:24:11 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /kong_prefix/nginx.conf:7 nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /kong_prefix/nginx.conf:7 2024/05/17 10:24:11 [debug] 1#0: [lua] globalpatches.lua:10: installing the globalpatches 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:473: init(): [dns-client] (re)configuring dns client 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:478: init(): [dns-client] staleTtl = 4 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:482: init(): [dns-client] noSynchronisation = nil 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:501: init(): [dns-client] query order = LAST, SRV, A, AAAA, CNAME 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: appshipyard-kong-7c8455cf5c-8bxbm = 9.149.182.119 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:565: init(): [dns-client] validTtl = nil 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:606: init(): [dns-client] nameserver 9.165.10.113 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:611: init(): [dns-client] attempts = 5 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:618: init(): [dns-client] no_random = true 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:627: init(): [dns-client] timeout = 2000 ms 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:631: init(): [dns-client] ndots = 5 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:633: init(): [dns-client] search = kong-test.svc.cluster.local, svc.cluster.local, cluster.local 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:646: init(): [dns-client] badTtl = 1 s 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:648: init(): [dns-client] emptyTtl = 30 s 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:473: init(): [dns-client] (re)configuring dns client 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:478: init(): [dns-client] staleTtl = 4 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:482: init(): [dns-client] noSynchronisation = true 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:501: init(): [dns-client] query order = LAST, SRV, A, CNAME 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: appshipyard-kong-7c8455cf5c-8bxbm = 9.149.182.119 2024/05/17 10:24:11 [debug] 
1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:565: init(): [dns-client] validTtl = nil 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:606: init(): [dns-client] nameserver 9.165.10.113 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:611: init(): [dns-client] attempts = 5 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:618: init(): [dns-client] no_random = true 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:627: init(): [dns-client] timeout = 2000 ms 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:631: init(): [dns-client] ndots = 5 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:633: init(): [dns-client] search = kong-test.svc.cluster.local, svc.cluster.local, cluster.local 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:646: init(): [dns-client] badTtl = 1 s 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:648: init(): [dns-client] emptyTtl = 30 s 2024/05/17 10:24:11 [debug] 1#0: [lua] globalpatches.lua:437: randomseed(): seeding PRNG from OpenSSL RAND_bytes() 2024/05/17 10:24:11 [info] 1#0: [lua] node.lua:289: new(): kong node-id: 5ec7712d-1d9b-4d77-88b9-744859916b7f 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:473: init(): [dns-client] (re)configuring dns client 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:478: init(): [dns-client] staleTtl = 4 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:482: init(): [dns-client] noSynchronisation = true 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:501: init(): [dns-client] query order = LAST, SRV, A, CNAME 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: appshipyard-kong-7c8455cf5c-8bxbm = 9.149.182.119 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: 
localhost = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1] 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:565: init(): [dns-client] validTtl = nil 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:606: init(): [dns-client] nameserver 9.165.10.113 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:611: init(): [dns-client] attempts = 5 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:618: init(): [dns-client] no_random = true 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:627: init(): [dns-client] timeout = 2000 ms 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:631: init(): [dns-client] ndots = 5 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:633: init(): [dns-client] search = kong-test.svc.cluster.local, svc.cluster.local, cluster.local 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:646: init(): [dns-client] badTtl = 1 s 2024/05/17 10:24:11 [debug] 1#0: [lua] client.lua:648: init(): [dns-client] emptyTtl = 30 s 2024/05/17 10:24:11 [debug] 1#0: [lua] vaults.lua:52: load_vault(): Loading vault: env 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: grpc-web 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: pre-function 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: post-function 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: azure-functions 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: zipkin 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: opentelemetry 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: coding-session-handler 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: coding-token-handler 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: super-key 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: shipyard-acl 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: signing 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: zhiyan-auth 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: request-rate-limit 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: http-logging 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: flexible-tag 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: qtap-auth 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: jwt 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'jwt.jwt_secrets' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: acl 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'acl.acls' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: correlation-id 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: cors 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: 
oauth2 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'oauth2.oauth2_credentials' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'oauth2.oauth2_authorization_codes' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'oauth2.oauth2_tokens' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: tcp-log 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: udp-log 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: file-log 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: http-log 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: key-auth 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'key-auth.keyauth_credentials' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: hmac-auth 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'hmac-auth.hmacauth_credentials' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: basic-auth 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'basic-auth.basicauth_credentials' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: ip-restriction 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: request-transformer 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: response-transformer 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: request-size-limiting 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: rate-limiting 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'rate-limiting.ratelimiting_metrics' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: response-ratelimiting 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: syslog 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: loggly 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: datadog 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: ldap-auth 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: statsd 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: bot-detection 2024/05/17 10:24:11 [debug] 1#0: [lua] init.lua:32: [lua-resty-luasocket] set CAfile-default to: '/etc/ssl/certs/ca-certificates.crt' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: aws-lambda 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: request-termination 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: prometheus 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: proxy-cache 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: session 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 
'session.sessions' 2024/05/17 10:24:11 [debug] 1#0: [lua] openssl.lua:5: [acme] using ffi, OpenSSL version linked: 30100040 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: acme 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:242: loader_fn(): Loading custom plugin entity: 'acme.acme_storage' 2024/05/17 10:24:11 [debug] 1#0: [lua] plugins.lua:284: load_plugin(): Loading plugin: grpc-gateway 2024/05/17 10:24:12 [notice] 1#0: [lua] init.lua:775: init(): [request-debug] token for request debugging: 053fb794-d621-4fe7-ad79-d773c725d882 2024/05/17 10:24:12 [notice] 1#0: using the "epoll" event method 2024/05/17 10:24:12 [notice] 1#0: openresty/1.21.4.2 2024/05/17 10:24:12 [notice] 1#0: OS: Linux 5.4.119-1-tlinux4-0009-eks 2024/05/17 10:24:12 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576 2024/05/17 10:24:12 [notice] 1#0: start worker processes 2024/05/17 10:24:12 [notice] 1#0: start worker process 1280 2024/05/17 10:24:12 [notice] 1#0: start worker process 1281 2024/05/17 10:24:12 [debug] 1281#0: *1 [lua] globalpatches.lua:437: randomseed(): seeding PRNG from OpenSSL RAND_bytes() 2024/05/17 10:24:12 [debug] 1280#0: *2 [lua] globalpatches.lua:437: randomseed(): seeding PRNG from OpenSSL RAND_bytes() 2024/05/17 10:24:13 [debug] 1280#0: *2 [lua] init.lua:233: invalidate_local(): [DB cache] invalidating (local): 'admin:gui:kconfig' 2024/05/17 10:24:13 [debug] 1281#0: *1 [lua] init.lua:233: invalidate_local(): [DB cache] invalidating (local): 'admin:gui:kconfig' 2024/05/17 10:24:13 [notice] 1280#0: *2 [lua] globalpatches.lua:73: sleep(): executing a blocking 'sleep' (0.001 seconds), context: init_worker_by_lua* 2024/05/17 10:24:13 [notice] 1280#0: *2 [lua] globalpatches.lua:73: sleep(): executing a blocking 'sleep' (0.002 seconds), context: init_worker_by_lua* 2024/05/17 10:24:13 [notice] 1280#0: *2 [lua] globalpatches.lua:73: sleep(): executing a blocking 'sleep' (0.004 seconds), context: init_worker_by_lua* 2024/05/17 10:24:13 [notice] 1280#0: *2 [lua] globalpatches.lua:73: sleep(): executing a blocking 'sleep' (0.008 seconds), context: init_worker_by_lua* 2024/05/17 10:24:13 [notice] 1281#0: *1 [lua] init.lua:259: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua* 2024/05/17 10:24:13 [notice] 1281#0: *1 [lua] init.lua:259: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua* 2024/05/17 10:24:13 [debug] 1281#0: *1 [lua] counter.lua:70: new(): start timer for shdict kong on worker 1 2024/05/17 10:24:13 [info] 1281#0: *1 [kong] handler.lua:87 [acme] acme renew timer started on worker 1, context: init_worker_by_lua* 2024/05/17 10:24:13 [debug] 1281#0: *1 [lua] counter.lua:70: new(): start timer for shdict prometheus_metrics on worker 1 2024/05/17 10:24:13 [debug] 1280#0: *2 [lua] counter.lua:70: new(): start timer for shdict kong on worker 0 2024/05/17 10:24:13 [info] 1280#0: *2 [kong] handler.lua:87 [acme] acme renew timer started on worker 0, context: init_worker_by_lua* 2024/05/17 10:24:13 [debug] 1280#0: *2 [lua] counter.lua:70: new(): start timer for shdict prometheus_metrics on worker 0 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:473: init(): [dns-client] (re)configuring dns client 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:478: init(): [dns-client] staleTtl = 4 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:482: init(): [dns-client] noSynchronisation = true 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:501: init(): [dns-client] query order = LAST, SRV, A, CNAME 
2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: appshipyard-kong-7c8455cf5c-8bxbm = 9.149.182.119 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1] 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:565: init(): [dns-client] validTtl = nil 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:606: init(): [dns-client] nameserver 9.165.10.113 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:611: init(): [dns-client] attempts = 5 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:618: init(): [dns-client] no_random = true 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:627: init(): [dns-client] timeout = 2000 ms 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:631: init(): [dns-client] ndots = 5 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:633: init(): [dns-client] search = kong-test.svc.cluster.local, svc.cluster.local, cluster.local 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:646: init(): [dns-client] badTtl = 1 s 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] client.lua:648: init(): [dns-client] emptyTtl = 30 s 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] upstreams.lua:86: loading upstreams dict into memory 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] upstreams.lua:108: no upstreams were specified 2024/05/17 10:24:13 [debug] 1281#0: *6 [lua] upstreams.lua:270: update_balancer_state(): update proxy state timer scheduled 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:473: init(): [dns-client] (re)configuring dns client 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:478: init(): [dns-client] staleTtl = 4 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:482: init(): [dns-client] noSynchronisation = true 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:501: init(): [dns-client] query order = LAST, SRV, A, CNAME 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: appshipyard-kong-7c8455cf5c-8bxbm = 9.149.182.119 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding 
AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:541: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:556: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1] 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:565: init(): [dns-client] validTtl = nil 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:606: init(): [dns-client] nameserver 9.165.10.113 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:611: init(): [dns-client] attempts = 5 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:618: init(): [dns-client] no_random = true 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:627: init(): [dns-client] timeout = 2000 ms 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:631: init(): [dns-client] ndots = 5 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:633: init(): [dns-client] search = kong-test.svc.cluster.local, svc.cluster.local, cluster.local 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:646: init(): [dns-client] badTtl = 1 s 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] client.lua:648: init(): [dns-client] emptyTtl = 30 s 2024/05/17 10:24:13 [debug] 1280#0: *153 [lua] upstreams.lua:270: update_balancer_state(): update proxy state timer scheduled 2024/05/17 10:24:13 [debug] 1281#0: *4 [lua] worker.lua:147: communicate(): 1 on (unix:/kong_prefix/worker_events.sock) is ready 2024/05/17 10:24:13 [debug] 1280#0: *151 [lua] worker.lua:147: communicate(): 0 on (unix:/kong_prefix/worker_events.sock) is ready 2024/05/17 10:24:13 [debug] 1280#0: *149 [lua] broker.lua:73: broadcast_events(): event published to 2 workers 2024/05/17 10:24:13 [debug] 1280#0: *149 [lua] broker.lua:73: broadcast_events(): event published to 2 workers 2024/05/17 10:24:13 [debug] 1280#0: *149 [lua] broker.lua:73: broadcast_events(): event published to 2 workers 2024/05/17 10:24:13 [debug] 1280#0: *296 [lua] broker.lua:73: broadcast_events(): event published to 2 workers 2024/05/17 10:24:13 [debug] 1280#0: *151 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:invalidations:kong_db_cache, wid=1 2024/05/17 10:24:13 [debug] 1280#0: *151 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_core_db_cache, wid=1 2024/05/17 10:24:13 [debug] 1280#0: *151 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_db_cache, wid=1 2024/05/17 10:24:13 [debug] 1281#0: *4 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:invalidations:kong_db_cache, wid=1 2024/05/17 10:24:13 [debug] 1280#0: *151 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:invalidations:kong_db_cache, wid=0 2024/05/17 10:24:13 [debug] 1281#0: *4 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, 
event=mlcache:purge:kong_core_db_cache, wid=1 2024/05/17 10:24:13 [debug] 1281#0: *4 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_db_cache, wid=1 2024/05/17 10:24:13 [debug] 1281#0: *4 [lua] callback.lua:114: do_event(): worker-events: handling event; source=mlcache, event=mlcache:invalidations:kong_db_cache, wid=0 2024/05/17 10:24:13 [debug] 1281#0: *3 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069444444444444, runable_jobs_avg: 1, alive_threads_avg: 144 2024/05/17 10:24:13 [debug] 1280#0: *150 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069444444444444, runable_jobs_avg: 1, alive_threads_avg: 144 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:24: Loading Status API endpoints 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: grpc-web 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: pre-function 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: post-function 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: azure-functions 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: zipkin 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: opentelemetry 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: coding-session-handler 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: coding-token-handler 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: super-key 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: shipyard-acl 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: signing 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: zhiyan-auth 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: request-rate-limit 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: http-logging 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: flexible-tag 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: qtap-auth 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: jwt 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: acl 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: correlation-id 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: cors 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: oauth2 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: tcp-log 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: udp-log 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: file-log 2024/05/17 
10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: http-log 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: key-auth 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: hmac-auth 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: basic-auth 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: ip-restriction 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: request-transformer 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: response-transformer 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: request-size-limiting 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: rate-limiting 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: response-ratelimiting 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: syslog 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: loggly 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: datadog 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: ldap-auth 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: statsd 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: bot-detection 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: aws-lambda 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: request-termination 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:65: Loading Status API endpoints for plugin: prometheus 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: proxy-cache 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: session 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: acme 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:68: No Status API endpoints loaded for plugin: grpc-gateway 2024/05/17 10:24:16 [notice] 1281#0: *297 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:16 [debug] 1281#0: *297 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:16 [notice] 1281#0: *298 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:16 [debug] 1281#0: *298 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:18 [debug] 1281#0: *299 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:26 [notice] 1281#0: *300 [lua] ready.lua:111: fn(): not ready 
for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:26 [debug] 1281#0: *300 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:26 [notice] 1281#0: *301 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:26 [debug] 1281#0: *301 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:28 [debug] 1281#0: *302 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:32 [debug] 1281#0: *303 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:36 [notice] 1281#0: *304 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:36 [debug] 1281#0: *304 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:38 [debug] 1281#0: *305 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:46 [notice] 1281#0: *306 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:46 [debug] 1281#0: *306 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:48 [debug] 1281#0: *307 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:56 [notice] 1281#0: *308 [lua] ready.lua:111: fn(): not ready for proxying: no configuration available (empty configuration present), client: 9.149.180.162, server: kong_status, request: "GET /status/ready HTTP/1.1", host: "9.149.182.119:8100" 2024/05/17 10:24:56 [debug] 1281#0: *308 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:24:58 [debug] 1281#0: *309 [lua] init.lua:23: poll(): worker-events: emulate poll method 2024/05/17 10:25:03 [debug] 1281#0: *303 [lua] init.lua:23: poll(): worker-events: emulate poll method ```
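Those "not ready for proxying: no configuration available" lines come from the readiness endpoint: in DB-less mode /status/ready only turns ready after the ingress controller has pushed a configuration, while /status just reports whether the workers are up. A quick way to watch the two endpoints diverge (a sketch, assuming the default status listener on port 8100, the default kong-kong naming, and that curl is available in the image):

# /status responds as soon as the workers are up
kubectl exec -n kong deploy/kong-kong -c proxy -- curl -si http://localhost:8100/status

# /status/ready stays 503 until the controller syncs a configuration into the proxy
kubectl exec -n kong deploy/kong-kong -c proxy -- curl -si http://localhost:8100/status/ready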
