
kubectl context support #59

Open
vquie opened this issue Mar 22, 2024 · 3 comments

@vquie

vquie commented Mar 22, 2024

Hi,

we are evaluating ArgoCD in our company, where we already use helmfile extensively, so this plugin is our saviour. :-)
We rely heavily on hooks that fetch data from the cluster with kubectl. That works like a charm when deploying to the cluster ArgoCD runs on, but not when working with multiple clusters. It seems there is no actual context in the kubeconfig.

To debug this, I ran the following bash script in a hook and added its output as an annotation on my deployment.

#!/bin/bash

# dump the kubeconfig visible inside the plugin container and base64-encode it
_CONFIG=$(kubectl config view | base64)

echo "${_CONFIG}"

Decoded, it shows this:

apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

What am I missing?

ArgoCD version: v2.10.0
argo-cd-helmfile: v0.3.9

Thank you very much!

@travisghansen
Owner

Welcome! If I understand your comments correctly, that's probably not a 'correct' approach. ArgoCD plugins do not actually run kubectl directly; they simply produce yaml and send it back to core ArgoCD, which does the actual applying to the cluster.

A single ArgoCD app can only have a single destination. To get around this you can use something like ApplicationSets to target multiple clusters.
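
For illustration, a minimal sketch of that pattern using the cluster generator; the app name, repo URL, path, and plugin name below are placeholders, not taken from this issue:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-helmfile-app
  namespace: argocd
spec:
  generators:
    # one Application is generated per cluster registered in ArgoCD
    - clusters: {}
  template:
    metadata:
      name: 'my-helmfile-app-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/my/repo.git
        targetRevision: HEAD
        path: helmfile.d
        plugin:
          name: helmfile   # placeholder plugin name
      destination:
        server: '{{server}}'   # each generated app targets a different registered cluster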

Honestly, the hooks you have made may not translate very well (assuming you are referring to helmfile hooks), as the only thing this plugin really does is invoke helmfile template (again, core ArgoCD does the actual applying). So many pre/post hooks likely aren't getting invoked at all. Additionally, the plugin itself has zero direct ability to interact with the target cluster(s), as you discovered with your kubectl testing.
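
Conceptually, the plugin behaves like a ConfigManagementPlugin whose generate step only renders manifests to stdout. This is a simplified, illustrative sketch only; the actual configuration shipped by argo-cd-helmfile differs (see that repo's README):

apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: helmfile
spec:
  generate:
    # the plugin only renders manifests to stdout; core ArgoCD applies them,
    # so nothing in this step can talk to the destination cluster directly
    command: ["helmfile"]
    args: ["template"]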

Does that help fill the gaps?

@vquie
Author

vquie commented Apr 2, 2024

Thank you. I assumed helmfile would receive the current cluster as the default context every time it runs, so I removed the context from the hooks when running as the argocd user to work around contexts. Unfortunately all my testing was done on the cluster ArgoCD runs on, and hooks work just fine on that cluster.
Do you know of any way to solve this? Since we use hooks a lot, this is a blocker for adopting ArgoCD.

@travisghansen
Owner

I think you could work up something custom where, within the hook, you use the service account of the plugin pod to fetch the creds (the cluster secret) for the target cluster from the argocd namespace and then use those for the context. Remember though, any hooks which are not invoked during the template command will not run. The plugins do not apply directly to the cluster; they simply generate yaml to hand back to ArgoCD to apply.
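
A rough sketch of that idea, assuming the plugin pod's service account is allowed to read secrets in the argocd namespace, the target cluster secret uses a bearer token, and jq is available in the hook environment; the cluster name is a placeholder:

#!/bin/bash
# hypothetical helmfile hook sketch, not part of the plugin

CLUSTER_NAME="my-target-cluster"   # placeholder

# locate the ArgoCD cluster secret for the target cluster
# (this call still uses the pod's in-cluster service account)
SECRET=$(kubectl -n argocd get secret \
  -l argocd.argoproj.io/secret-type=cluster -o json \
  | jq -r --arg name "${CLUSTER_NAME}" '.items[] | select((.data.name | @base64d) == $name)')

SERVER=$(echo "${SECRET}" | jq -r '.data.server | @base64d')
TOKEN=$(echo "${SECRET}"  | jq -r '.data.config | @base64d | fromjson | .bearerToken')

# write a throwaway kubeconfig so the hook gets a real context for the target cluster
# (TLS verification is skipped here for brevity; in real use take the CA data from the secret)
export KUBECONFIG=/tmp/target-kubeconfig
kubectl config set-cluster "${CLUSTER_NAME}" --server="${SERVER}" --insecure-skip-tls-verify=true
kubectl config set-credentials "${CLUSTER_NAME}" --token="${TOKEN}"
kubectl config set-context "${CLUSTER_NAME}" --cluster="${CLUSTER_NAME}" --user="${CLUSTER_NAME}"
kubectl config use-context "${CLUSTER_NAME}"

# subsequent kubectl calls in this hook now hit the target cluster
kubectl get nodes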
