
[HELM]: Added checksum config annotation in stateful set for broker, controller and server #13059

Merged
merged 3 commits into apache:master on May 16, 2024

Conversation

abhioncbr
Contributor

As per the issue,

Added the checksum/config annotation to the broker, server, and controller StatefulSets. More information about the annotation can be found here.
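For context, the annotation follows the standard Helm trick for automatically rolling workloads when a ConfigMap changes: the pod template hashes the rendered ConfigMap, so any config change produces a new annotation value and triggers a rolling restart of the StatefulSet. A minimal sketch of the pattern (the exact template path used in this chart may differ):

  template:
    metadata:
      annotations:
        # Hash the rendered ConfigMap template; when the config changes,
        # the checksum changes and the StatefulSet rolls its pods.
        checksum/config: {{ include (print $.Template.BasePath "/broker/configmap.yaml") . | sha256sum }}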

Also, here is the output of the helm lint command

$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

And here is the output of the helm template command

$ helm template
---
# Source: pinot/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-pinot
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-broker-config
data:
  pinot-broker.conf: |-
    pinot.broker.client.queryPort=8099
    pinot.broker.routing.table.builder.class=random
    pinot.set.instance.id.to.hostname=true
    pinot.query.server.port=7321
    pinot.query.runner.port=7732
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-controller-config
data:
  pinot-controller.conf: |-
    controller.helix.cluster.name=pinot-quickstart
    controller.port=9000
    controller.data.dir=/var/pinot/controller/data
    controller.zk.str=release-name-zookeeper:2181
    pinot.set.instance.id.to.hostname=true
    controller.task.scheduler.enabled=true
---
# Source: pinot/templates/minion-stateless/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-minion-stateless-config
data:
  pinot-minion-stateless.conf: |-
    pinot.minion.port=9514
    dataDir=/var/pinot/minion/data
    pinot.set.instance.id.to.hostname=true
---
# Source: pinot/templates/server/configmap.yaml


apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-server-config
data:
  pinot-server.conf: |-
    pinot.server.netty.port=8098
    pinot.server.adminapi.port=8097
    pinot.server.instance.dataDir=/var/pinot/server/data/index
    pinot.server.instance.segmentTarDir=/var/pinot/server/data/segment
    pinot.set.instance.id.to.hostname=true
    pinot.server.instance.realtime.alloc.offheap=true
    pinot.query.server.port=7321
    pinot.query.runner.port=7732
---
# Source: pinot/charts/zookeeper/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper-headless
  namespace: consumer
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-7.0.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    
    - name: tcp-client
      port: 2181
      targetPort: client
    
    
    - name: follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: zookeeper
---
# Source: pinot/charts/zookeeper/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper
  namespace: consumer
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-7.0.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  ports:
    
    - name: tcp-client
      port: 2181
      targetPort: client
      nodePort: null
    
    
    - name: follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: zookeeper
---
# Source: pinot/templates/broker/service-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-broker-external
  annotations:
    {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  type: LoadBalancer
  ports:
    - name: external-broker
      port: 8099
  selector:
    app: pinot
    release: release-name
    component: broker
---
# Source: pinot/templates/broker/service-headless.yaml


apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-broker-headless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  clusterIP: None
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: broker
      port: 8099
  selector:
    app: pinot
    release: release-name
    component: broker
---
# Source: pinot/templates/broker/service.yaml


apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-broker
  annotations:
    {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  type: ClusterIP
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: broker
      port: 8099
  selector:
    app: pinot
    release: release-name
    component: broker
---
# Source: pinot/templates/controller/service-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-controller-external
  annotations:
    {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  type: LoadBalancer
  ports:
    - name: external-controller
      port: 9000
  selector:
    app: pinot
    release: release-name
    component: controller
---
# Source: pinot/templates/controller/service-headless.yaml


apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-controller-headless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  clusterIP: None
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: controller
      port: 9000
  selector:
    app: pinot
    release: release-name
    component: controller
---
# Source: pinot/templates/controller/service.yaml


apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-controller
  annotations:
    {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  type: ClusterIP
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: controller
      port: 9000
  selector:
    app: pinot
    release: release-name
    component: controller
---
# Source: pinot/templates/server/service-headless.yaml


apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-server-headless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  clusterIP: None
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: netty
      port: 8098
      protocol: TCP
    - name: admin
      port: 80
      targetPort: 8097
      protocol: TCP
  selector:
    app: pinot
    release: release-name
    component: server
---
# Source: pinot/templates/server/service.yaml


apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-server
  annotations:
    {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  type: ClusterIP
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: netty
      port: 8098
      protocol: TCP
    - name: admin
      port: 80
      targetPort: 8097
      protocol: TCP
  selector:
    app: pinot
    release: release-name
    component: server
---
# Source: pinot/templates/minion-stateless/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-pinot-minion-stateless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: minion-stateless
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: minion-stateless
  replicas: 1
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: minion-stateless
      annotations:
        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext:
        {}
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      containers:
      - name: minion-stateless
        securityContext:
          {}
        image: "apachepinot/pinot:latest"
        imagePullPolicy: Always
        args: [
          "StartMinion",
          "-clusterName", "pinot-quickstart",
          "-zkAddress", "release-name-zookeeper:2181",
          "-configFileName", "/var/pinot/minion/config/pinot-minion-stateless.conf"
        ]
        env:
          - name: JAVA_OPTS
            value: "-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-minion.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-minion-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
          - name: LOG4J_CONSOLE_LEVEL
            value: info
        envFrom:
          []
        ports:
          - containerPort: 9514
            protocol: TCP
            name: minion
        livenessProbe:
          initialDelaySeconds: 60
          periodSeconds: 10
          httpGet:
            path: /health
            port: 9514
        readinessProbe:
          initialDelaySeconds: 60
          periodSeconds: 10
          httpGet:
            path: /health
            port: 9514
        volumeMounts:
          - name: config
            mountPath: /var/pinot/minion/config
        resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-minion-stateless-config
        - name: data
          emptyDir: {}
---
# Source: pinot/charts/zookeeper/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-zookeeper
  namespace: consumer
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-7.0.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
    role: zookeeper
spec:
  serviceName: release-name-zookeeper-headless
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: zookeeper
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: zookeeper
  template:
    metadata:
      name: release-name-zookeeper
      labels:
        app.kubernetes.io/name: zookeeper
        helm.sh/chart: zookeeper-7.0.0
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: zookeeper
    spec:
      
      serviceAccountName: default
      securityContext:
        fsGroup: 1001
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: zookeeper
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/component: zookeeper
                namespaces:
                  - "consumer"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      containers:
        - name: zookeeper
          image: docker.io/bitnami/zookeeper:3.7.0-debian-10-r56
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          command:
            - bash
            - -ec
            - |
                # Execute entrypoint as usual after obtaining ZOO_SERVER_ID
                # check ZOO_SERVER_ID in persistent volume via myid
                # if not present, set based on POD hostname
                if [[ -f "/bitnami/zookeeper/data/myid" ]]; then
                  export ZOO_SERVER_ID="$(cat /bitnami/zookeeper/data/myid)"
                else
                  HOSTNAME=`hostname -s`
                  if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
                    ORD=${BASH_REMATCH[2]}
                    export ZOO_SERVER_ID=$((ORD + 1 ))
                  else
                    echo "Failed to get index from hostname $HOST"
                    exit 1
                  fi
                fi
                exec /entrypoint.sh /run.sh
          resources:
            requests:
              cpu: 250m
              memory: 1.25Gi
          env:
            - name: ZOO_DATA_LOG_DIR
              value: ""
            - name: ZOO_PORT_NUMBER
              value: "2181"
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "10"
            - name: ZOO_SYNC_LIMIT
              value: "5"
            - name: ZOO_MAX_CLIENT_CNXNS
              value: "60"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr, mntr, ruok"
            - name: ZOO_LISTEN_ALLIPS_ENABLED
              value: "no"
            - name: ZOO_AUTOPURGE_INTERVAL
              value: "1"
            - name: ZOO_AUTOPURGE_RETAIN_COUNT
              value: "5"
            - name: ZOO_MAX_SESSION_TIMEOUT
              value: "40000"
            - name: ZOO_SERVERS
              value: release-name-zookeeper-0.release-name-zookeeper-headless.consumer.svc.cluster.local:2888:3888::1 
            - name: ZOO_ENABLE_AUTH
              value: "no"
            - name: ZOO_HEAP_SIZE
              value: "1024"
            - name: ZOO_LOG_LEVEL
              value: "ERROR"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          ports:
            - name: client
              containerPort: 2181
            - name: follower
              containerPort: 2888
            - name: election
              containerPort: 3888
          livenessProbe:
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: data
              mountPath: /bitnami/zookeeper
      volumes:
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: pinot/templates/broker/statefulset.yaml


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-pinot-broker
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: broker
  serviceName: release-name-pinot-broker-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: broker
      annotations:
        checksum/config:b6426af6821d74c336050babe831e53b07c558ec0609fdfea5bf196a5f8ffd7e

        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext:
        {}
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      containers:
      - name: broker
        securityContext:
          {}
        image: "apachepinot/pinot:latest"
        imagePullPolicy: Always
        args: [
          "StartBroker",
          "-clusterName", "pinot-quickstart",
          "-zkAddress", "release-name-zookeeper:2181",
          "-configFileName", "/var/pinot/broker/config/pinot-broker.conf"
        ]
        env:
          - name: JAVA_OPTS
            value: "-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-broker.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-broker-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
          - name: LOG4J_CONSOLE_LEVEL
            value: info
        envFrom:
          []
        ports:
          - containerPort: 8099
            protocol: TCP
            name: broker                  
        volumeMounts:
          - name: config
            mountPath: /var/pinot/broker/config
        livenessProbe:
          initialDelaySeconds: 60
          periodSeconds: 10
          httpGet:
            path: /health
            port: 8099
        readinessProbe:
          initialDelaySeconds: 60
          periodSeconds: 10
          httpGet:
            path: /health
            port: 8099
        resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-broker-config
---
# Source: pinot/templates/controller/statefulset.yaml


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-pinot-controller
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: controller
  serviceName: release-name-pinot-controller-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: controller
      annotations:
        checksum/config:ee3073abb448053a09d52ba09889069d1c28d2d9626bb86c7c3e5f8f75a9736f

        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext:
        {}
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      containers:
      - name: controller
        securityContext:
          {}
        image: "apachepinot/pinot:latest"
        imagePullPolicy: Always
        args: [ "StartController", "-configFileName", "/var/pinot/controller/config/pinot-controller.conf" ]
        env:
          - name: JAVA_OPTS
            value: "-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-controller.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-controller-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
          - name: LOG4J_CONSOLE_LEVEL
            value: info
        envFrom:
          []
        ports:
          - containerPort: 9000
            protocol: TCP
            name: controller
        volumeMounts:
          - name: config
            mountPath: /var/pinot/controller/config
          - name: data
            mountPath: "/var/pinot/controller/data"
        resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
      - name: config
        configMap:
          name: release-name-pinot-controller-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "1G"
---
# Source: pinot/templates/server/statefulset.yaml


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-pinot-server
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: server
  serviceName: release-name-pinot-server-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: server
      annotations:
        checksum/config:9afbdb5bf6c23556934cfe4f46d3916d6203064fe3d62854815f2426ef43c6c3

        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext:
        {}
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      containers:
      - name: server
        securityContext:
          {}
        image: "apachepinot/pinot:latest"
        imagePullPolicy: Always
        args: [
          "StartServer",
          "-clusterName", "pinot-quickstart",
          "-zkAddress", "release-name-zookeeper:2181",
          "-configFileName", "/var/pinot/server/config/pinot-server.conf"
        ]
        env:
          - name: JAVA_OPTS
            value: "-Xms512M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-server.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-server-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
          - name: LOG4J_CONSOLE_LEVEL
            value: info
        envFrom:
          []
        ports:
          - containerPort: 8098
            protocol: TCP
            name: netty
          - containerPort: 8097
            protocol: TCP
            name: admin
        volumeMounts:
          - name: config
            mountPath: /var/pinot/server/config
          - name: data
            mountPath: "/var/pinot/server/data"
        resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-server-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 4G
---
# Source: pinot/templates/broker/service-external.yaml

---
# Source: pinot/templates/controller/service-external.yaml

---
# Source: pinot/templates/minion-stateless/pvc.yaml

---
# Source: pinot/templates/minion/configmap.yaml

---
# Source: pinot/templates/minion/service-headless.yaml

---
# Source: pinot/templates/minion/service.yaml

---
# Source: pinot/templates/minion/statefulset.yaml

@codecov-commenter

codecov-commenter commented May 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 62.11%. Comparing base (59551e4) to head (51c38c0).
Report is 446 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff              @@
##             master   #13059      +/-   ##
============================================
+ Coverage     61.75%   62.11%   +0.36%     
+ Complexity      207      198       -9     
============================================
  Files          2436     2515      +79     
  Lines        133233   137862    +4629     
  Branches      20636    21335     +699     
============================================
+ Hits          82274    85635    +3361     
- Misses        44911    45833     +922     
- Partials       6048     6394     +346     
Flag                     Coverage Δ
custom-integration1      <0.01% <ø> (-0.01%) ⬇️
integration              <0.01% <ø> (-0.01%) ⬇️
integration1             <0.01% <ø> (-0.01%) ⬇️
integration2             ?
java-11                  62.11% <ø> (+0.40%) ⬆️
java-21                  <0.01% <ø> (-61.63%) ⬇️
skip-bytebuffers-false   62.11% <ø> (+0.36%) ⬆️
skip-bytebuffers-true    <0.01% <ø> (-27.73%) ⬇️
temurin                  62.11% <ø> (+0.36%) ⬆️
unittests                62.11% <ø> (+0.36%) ⬆️
unittests1               46.68% <ø> (-0.21%) ⬇️
unittests2               27.78% <ø> (+0.05%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Contributor

@Jackie-Jiang left a comment


@xiangfu0 @zhtaoxiang Can you help take a look?

@@ -164,6 +164,10 @@ controller:

podAnnotations: {}

# set enabled as true, to automatically roll controller stateful set for configmap change
Contributor

Do we want to set it to true, or should it be false by default, with the comment just explaining the behavior when it is set to true?

Contributor Author

The comment explains the behaviour. I think it should be false by default, because setting it to true will restart the pods whenever the associated ConfigMap changes.
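For illustration, the toggle discussed above could gate the annotation in the StatefulSet template roughly like this (a sketch only; the values key name, assumed here to be controller.automaticReload.enabled, may differ in the chart):

      annotations:
        # Hypothetical flag: only emit the checksum when enabled, so pods are
        # not restarted on ConfigMap changes by default.
        {{- if .Values.controller.automaticReload.enabled }}
        checksum/config: {{ include (print $.Template.BasePath "/controller/configmap.yaml") . | sha256sum }}
        {{- end }}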

@xiangfu0
Contributor

Please also add this for the minion statefulSet as well: https://github.com/apache/pinot/blob/master/helm/pinot/templates/minion/statefulset.yaml

@xiangfu0 xiangfu0 merged commit 340286b into apache:master May 16, 2024
20 checks passed