Argocd not using plugin and timeout on manual try #594

Open
janluak opened this issue Dec 20, 2023 · 0 comments

Hey guys,

thanks for the help in advance :)

Describe the bug
Although the avp sidecar is running and all variables are set, the desired value is not fetched from Vault.
When running sh manually in the avp sidecar, the command argocd-vault-plugin generate secret.yaml times out.

To Reproduce
These are my configs:

# plugin ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp-plugin
data:
  avp.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: argocd-vault-plugin
    spec:
      allowConcurrency: true
      discover:
        find:
          command:
            - sh
            - "-c"
            - "find . -name '*.yaml' | xargs -I {} grep \"<path\\|avp\\.kubernetes\\.io\" {} | grep ."
      generate:
        command:
          - argocd-vault-plugin
          - generate
          - "."
      lockRepo: false
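
To check that the discover rule fires for a given chart, the find/grep pipeline can be run by hand from a checkout of the repo; a non-empty result is what tells Argo CD the plugin applies (the directory path below is illustrative):

# manual test of the discover command against a chart checkout
cd /path/to/my-chart
find . -name '*.yaml' | xargs -I {} grep "<path\|avp\.kubernetes\.io" {} | grep . \
  && echo "plugin would match" \
  || echo "plugin would NOT match"
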
# values.yaml

global:
  image:
    tag: v2.9.3
crds:
  install: false

configs:
  secret:
    extra:
      VAULT_ADDR: http://vault.secrets
      AVP_TYPE: vault
      AVP_AUTH_TYPE: k8s
      AVP_K8S_ROLE: argocd


repoServer:
  env:
    - name: VAULT_ADDR
      value: vault.secrets
    - name: AVP_TYPE
      value: vault
    - name: AVP_AUTH_TYPE
      value: k8s
    - name: AVP_K8S_ROLE
      value: argocd

  volumes:
    - configMap:
        name: cmp-plugin
      name: cmp-plugin
    - name: custom-tools
      emptyDir: { }
  initContainers:
    - name: download-tools
      image: custom image from python:3.11-alpine with argocd cli + curl
      imagePullPolicy: Always
      env:
        - name: AVP_VERSION
          value: 1.17.0
        - name: http_proxy
          value: http://123.123.123.123:80
        - name: https_proxy
          value: http://123.123.123.123:80
        - name: no_proxy
          value: .intern,.svc,.local
        - name: KUBERNETES_SERVICE_HOST
          value: kubernetes.default.svc
      command: [ sh, -c ]
      args:
        - >-
          curl -L https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v$(AVP_VERSION)/argocd-vault-plugin_$(AVP_VERSION)_linux_amd64 -o argocd-vault-plugin &&
          chmod +x argocd-vault-plugin &&
          mv argocd-vault-plugin /custom-tools/
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
  extraContainers:
    - name: avp
      command: [ /var/run/argocd/argocd-cmp-server ]
      image: quay.io/argoproj/argocd:v2.9.3
      env:
        - name: VAULT_ADDR
          value: http://secrets-management-vault.secrets
        - name: AVP_TYPE
          value: vault
        - name: AVP_AUTH_TYPE
          value: k8s
        - name: AVP_K8S_ROLE
          value: argocd
      volumeMounts:
        - mountPath: /var/run/argocd
          name: var-files
        - mountPath: /home/argocd/cmp-server/plugins
          name: plugins
        - mountPath: /tmp
          name: tmp

        # Register plugins into sidecar
        - mountPath: /home/argocd/cmp-server/config/plugin.yaml
          subPath: avp.yaml
          name: cmp-plugin

        # Important: Mount tools into $PATH
        - name: custom-tools
          subPath: argocd-vault-plugin
          mountPath: /usr/local/bin/argocd-vault-plugin
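
For debugging, the sidecar can be inspected to confirm that the binary mount and the environment actually arrive; something along these lines (the deployment name is assumed from the release name argocd-test):

# check the mounted plugin binary and the AVP/Vault env vars in the avp sidecar
kubectl -n argocd-test exec deploy/argocd-test-repo-server -c avp -- \
  sh -c 'ls -l /usr/local/bin/argocd-vault-plugin && env | grep -E "AVP_|VAULT_"'
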
# secret.yaml to parse to

apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: {{ .Release.Namespace }}
  annotations:
    avp.kubernetes.io/path: "admin-secrets/my-secret"
stringData:
  client-secret: "<client-secret>"

Expected behavior
To make sure my Vault config is correct, I added a deployment (see below) using the default agent-inject method → this works.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-access-check
  namespace: {{ .Release.Namespace }}
  labels:
    kind: examples
    app: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      kind: examples
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        kind: examples
        app: {{ .Release.Name }}
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "argocd"
        vault.hashicorp.com/agent-inject-secret-test.json: "admin-secrets/my-secret"
    spec:
      serviceAccountName: argocd-test-repo-server
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
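
The injected secret can then be read from inside that pod to confirm the Vault side, roughly like this (kubectl exec also accepts a deployment name; the namespace placeholder stands for the release namespace, and the file name follows from the agent-inject-secret-test.json annotation):

# confirm the agent-injected secret is present in the test pod
kubectl -n <release-namespace> exec deploy/vault-access-check -c nginx -- \
  cat /vault/secrets/test.json
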

When interactively shelling into the avp sidecar container and copying secret.yaml there, I tried the command argocd-vault-plugin generate secret.yaml, which results in a timeout.
When simply asking Argo CD to apply the secret.yaml from git, the secret is not fetched at all.
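
To see whether the CMP sidecar is registered and gets invoked at all, the sidecar and repo-server logs can be checked with something like (deployment name again assumed from the release name):

# plugin registration / invocation logs
kubectl -n argocd-test logs deploy/argocd-test-repo-server -c avp --tail=100
kubectl -n argocd-test logs deploy/argocd-test-repo-server -c repo-server --tail=100 | grep -i plugin
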

Screenshots/Verbose output

argocd-vault-plugin generate secret.yaml --verbose-sensitive-output
2023/12/20 08:08:49 reading configuration from environment, overriding any previous settings
2023/12/20 08:08:49 AVP configured with the following settings:

2023/12/20 08:08:49 avp_kv_version: 2

2023/12/20 08:08:49 Hashicorp Vault cannot retrieve cached token: stat /home/argocd/.avp/config.json: no such file or directory. Generating a new one
2023/12/20 08:08:49 Hashicorp Vault authenticating with Vault role argocd using Kubernetes service account token /var/run/secrets/kubernetes.io/serviceaccount/token read from ***
Error: context deadline exceeded
Usage:
  argocd-vault-plugin generate <path> [flags]
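
The context deadline exceeded happens while authenticating, so a rough network check from inside the avp container may narrow it down (assuming curl is available in the image and the Kubernetes auth method is mounted at Vault's default path kubernetes/):

# unauthenticated Vault health endpoint
curl -sv "$VAULT_ADDR/v1/sys/health"

# roughly the login call made for AVP_AUTH_TYPE=k8s
curl -sv --request POST \
  --data "{\"role\": \"argocd\", \"jwt\": \"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"}" \
  "$VAULT_ADDR/v1/auth/kubernetes/login"
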

Additional context
The Argo CD installation is called argocd-test and runs in the namespace argocd-test so that it does not interfere with the default installation on the cluster.

I also tried adjusting the ClusterRoleBinding as mentioned somewhere in the docs, but this didn't really help:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: secrets | default | argocd-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: secrets-management-vault
    namespace: secrets | default
  - kind: ServiceAccount
    name: argocd-test-repo-server
    namespace: argocd-test | default
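
Whether that binding actually grants TokenReview to the listed subjects can be checked with kubectl auth can-i (subject names taken from the binding above):

# check TokenReview permission for each service account
kubectl auth can-i create tokenreviews.authentication.k8s.io \
  --as=system:serviceaccount:secrets:secrets-management-vault
kubectl auth can-i create tokenreviews.authentication.k8s.io \
  --as=system:serviceaccount:argocd-test:argocd-test-repo-server
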

Additionally, I tried a root-level Vault token with the token auth method → same issues.
