
Support the use of nacos as storage when deploying with k8s #642

Open
keepal7 opened this issue Nov 17, 2023 · 9 comments · May be fixed by #796

Comments

@keepal7

keepal7 commented Nov 17, 2023

Background & feature description

Many companies use multiple k8s clusters to form federated clusters.

Some business teams want to maintain a Higress cluster on their own, rather than adopting it across the entire company's infrastructure.

When it is not possible to deploy higress in each k8s cluster, a separate component is needed to store higress metadata. Nacos is a good choice.

@CH3CHO
Collaborator

CH3CHO commented Nov 17, 2023

Task overview:

  1. Deploy Higress API Server in the K8s cluster: https://github.com/higress-group/higress-standalone/tree/main/src/apiserver
  2. Make Higress Controller pull resources from Higress API Server instead of the K8s API Server

@CH3CHO
Collaborator

CH3CHO commented Nov 22, 2023

Step 1: Generate Higress API Server configurations

Install the standalone version of Higress with the built-in Nacos.

curl -fsSL https://higress.io/standalone/get-higress.sh | bash -s -- -a --use-builtin-nacos

Save all files in ./higress/compose/volumes/api and ./higress/compose/volumes/kube for future use.

Clean up: Execute ./higress/bin/reset.sh and delete ./higress.
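The save step above can be sketched as a small helper (the function name and backup destination are illustrative, not part of the installer):

```shell
# save_higress_config: copy the api/ and kube/ config directories generated
# by the standalone installer from a Higress installation root into a backup
# directory for later use.
save_higress_config() {
  local src="$1"   # Higress installation root, e.g. ./higress
  local dest="$2"  # backup destination, e.g. ./higress-config
  mkdir -p "$dest"
  cp -r "$src/compose/volumes/api" "$dest/api"
  cp -r "$src/compose/volumes/kube" "$dest/kube"
}

# Typical usage after the installer has finished:
#   save_higress_config ./higress ./higress-config
#   ./higress/bin/reset.sh
#   rm -rf ./higress
```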

Step 2: Deploy Higress API Server in K8s

Create a Secret resource to store configurations to be used by Higress API Server.

apiVersion: v1
kind: Secret
metadata:
  name: higress-apiserver
  namespace: higress-system
data:
  ca.crt: $(base64 -w0 < api/ca.crt)
  ca.key: $(base64 -w0 < api/ca.key)
  server.crt: $(base64 -w0 < api/server.crt)
  server.key: $(base64 -w0 < api/server.key)
  client.crt: $(base64 -w0 < api/client.crt)
  client.key: $(base64 -w0 < api/client.key)
  nacos.key: $(base64 -w0 < api/nacos.key)
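Note that Kubernetes Secret `data` values must be base64-encoded, so the `$(...)` substitutions above are meant to be expanded by a shell before applying the manifest. A minimal sketch of such a renderer (function names are illustrative):

```shell
# Render the higress-apiserver Secret manifest from the files saved in
# Step 1. Every file is passed through base64 with line wrapping stripped,
# as required for Secret `data` values.
b64file() { base64 < "$1" | tr -d '\n'; }

render_secret() {
  local dir="$1"   # directory holding the api/ files saved in Step 1
  cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: higress-apiserver
  namespace: higress-system
data:
  ca.crt: $(b64file "$dir/ca.crt")
  ca.key: $(b64file "$dir/ca.key")
  server.crt: $(b64file "$dir/server.crt")
  server.key: $(b64file "$dir/server.key")
  client.crt: $(b64file "$dir/client.crt")
  client.key: $(b64file "$dir/client.key")
  nacos.key: $(b64file "$dir/nacos.key")
EOF
}
```

Pipe the output to `kubectl apply -f -`. Alternatively, `kubectl create secret generic higress-apiserver --from-file=...` base64-encodes files automatically.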

Create a Deployment resource for Higress API Server.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: higress-apiserver
    higress: higress-apiserver
  name: higress-apiserver
  namespace: higress-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: higress-apiserver
      higress: higress-apiserver
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
      labels:
        app: higress-apiserver
        higress: higress-apiserver
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - args:
        - --secure-port
        - "8443"
        - --client-ca-file
        - /etc/api/ca.crt
        - --tls-cert-file
        - /etc/api/server.crt
        - --tls-private-key-file
        - /etc/api/server.key
        - --storage
        - nacos
        - --nacos-server
        - http://NACOS_SERVER_IP:8848
        - --nacos-username
        - NACOS_USERNAME
        - --nacos-password
        - NACOS_PASSWORD
        - --nacos-ns-id
        - NACOS_NS_ID
        - --nacos-encryption-key-file
        - /etc/api/nacos.key
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/api-server:0.0.10
        imagePullPolicy: IfNotPresent
        name: higress-apiserver
        ports:
        - containerPort: 8443
          hostPort: 8443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /readyz
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 3
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 1337
          runAsNonRoot: true
          runAsUser: 1337
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/api
          name: config
          readOnly: true
        - mountPath: /tmp/nacos
          name: nacos-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config
        secret:
          defaultMode: 420
          secretName: higress-apiserver
      - emptyDir: {}
        name: nacos-data

Create a Service resource for Higress API Server.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: higress-apiserver
    higress: higress-apiserver
  name: higress-apiserver
  namespace: higress-system
spec:
  ports:
  - name: https
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: higress-apiserver
    higress: higress-apiserver
  type: ClusterIP

Step 3: Point Higress Controller to Higress API Server

Add kubeconfig file into the higress-config ConfigMap.

apiVersion: v1
data:
  kubeconfig: |
    apiVersion: v1
    kind: Config
    clusters: ....
  mesh: |-
    accessLogEncoding: TEXT
    accessLogFile: /dev/stdout
    accessLogFormat: ...
  meshNetworks: 'networks: {}'
kind: ConfigMap
metadata:
  ...
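For reference, the elided kubeconfig typically points at the higress-apiserver Service using the CA and client certificate from Step 1. A hypothetical sketch (cluster and user names are illustrative; replace the placeholders with the base64-encoded file contents):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: higress-apiserver
  cluster:
    server: https://higress-apiserver.higress-system:8443
    certificate-authority-data: <base64 of api/ca.crt>
contexts:
- name: higress-apiserver
  context:
    cluster: higress-apiserver
    user: higress-admin
current-context: higress-apiserver
users:
- name: higress-admin
  user:
    client-certificate-data: <base64 of api/client.crt>
    client-key-data: <base64 of api/client.key>
```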

Update the higress-controller Deployment: change the args field of the higress-core container as follows.

      - args:
        - serve
        - --kubeconfig=/etc/istio/config/kubeconfig
        - --gatewaySelectorKey=higress
        - --gatewaySelectorValue=higress-system-higress-gateway
        - --ingressClass=higress

If everything goes well, you should be able to find a config item named `` in the corresponding namespace in Nacos.


All set. You can create Ingress resources in Nacos now.

Create a config whose name has the prefix `ingresses.`, put the Ingress YAML data into it, and save.
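For example, a config named `ingresses.demo` might contain a standard Ingress resource (the names and host below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: higress-system
spec:
  ingressClassName: higress
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
```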


@sjcsjc123
Collaborator

Development plan:
1. Add an enableNacosStorage parameter to helm/values.yaml and adjust the template files under helm/core/templates accordingly.
2. Start higress-api-server from those templates, and modify the ConfigMap template so that it points to higress-api-server.
3. Testing: add the enableNacosStorage flag to the helm command used by make install-dev, install Higress via helm, then verify (via e2e tests) that Ingress resources added in Nacos take effect.

Overall, the changes should be roughly confined to the helm directory. If this approach sounds good, I can give it a try.
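A sketch of what the proposed values.yaml switch might look like (all key names below are illustrative, not final):

```yaml
# helm/values.yaml (proposed, illustrative)
global:
  enableNacosStorage: true
nacos:
  server: http://NACOS_SERVER_IP:8848
  nsId: NACOS_NS_ID
```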

@johnlanni
Collaborator

@sjcsjc123 Run higress-api-server as a sidecar container inside higress-controller. Also, since the start order of sidecar containers cannot be controlled, and the higress-core container must wait for higress-api-server to be up, the higress-core container's entrypoint script may need adjusting so that it only starts once higress-api-server's port is listening.

@johnlanni removed the `help wanted` label on Jan 12, 2024
@johnlanni
Collaborator

As discussed offline, using the Console currently still depends on some configuration already existing in storage, so the Console needs to perform an initialization step for this. @CH3CHO please help follow up.

@lcfang

lcfang commented Jan 16, 2024

Could a standardized interface be provided, so that different registries can integrate with it?

@sjcsjc123
Collaborator

> Could a standardized interface be provided, so that different registries can integrate with it?

That can be provided later. I can make the changes on the apiserver side.

@CH3CHO
Collaborator

CH3CHO commented Jan 19, 2024

> Could a standardized interface be provided, so that different registries can integrate with it?

Actually, you can just integrate with Storage directly; that is already a standard interface.

@2456868764
Collaborator

> Could a standardized interface be provided, so that different registries can integrate with it?
>
> Actually, you can just integrate with Storage directly; that is already a standard interface.

We could consider building support for additional backends such as Apollo, Redis, or MySQL on top of the API server's Storage interface.

Status: In Progress · 6 participants