
add volumes support to ExecutorPlugin #13026

Open
dgolja opened this issue May 9, 2024 · 3 comments
Labels
area/agent · area/executor · area/plugins · solution/suggested · type/feature

Comments

@dgolja

dgolja commented May 9, 2024

Summary

I was looking at the executor plugins spec definition, which is basically a subset of the containers definition. I was wondering whether we should also extend the spec with a Volumes definition, or is there a security issue with that? I am mostly after a way to mount Secret/ConfigMap volumes inside the plugin(s).

Use Cases

apiVersion: argoproj.io/v1alpha1
kind: ExecutorPlugin
metadata:
  name: hello
spec:
  sidecar:
    volumes:
      - name: secret-volume
        secret:
          secretName: dotfile-secret
    container:
      # ...
      volumeMounts:
      - name: secret-volume
        readOnly: true
        mountPath: "/etc/secret-volume"
      # ...

I believe there is value in having access to secret volumes, which can be managed through ESO (External Secrets Operator) for specific use cases where you want to dynamically manage secrets for plugins.


Message from the maintainers:

Love this feature request? Give it a 👍. We prioritise the proposals with the most 👍.

@dgolja dgolja added the type/feature Feature request label May 9, 2024
@dgolja
Author

dgolja commented May 9, 2024

If there is no disagreement, I'm happy to provide the code implementation.

@agilgur5 agilgur5 added area/executor area/plugins area/agent Argo Agent that runs for HTTP and Plugin templates labels May 15, 2024
@agilgur5
Member

As previously discussed on Slack, I couldn't think of any problems with it at the time, so I imagine it might've just never been implemented or requested.

Giving it a bit more thought, the only possible issue I see with the proposal is where volumes are defined. Those are currently defined at the Workflow or template-level. Template-level roughly aligns to Pod-level, which is where k8s defines them.
volumeMounts at the container-level are standard though.

So I'm not sure about volumes defined in plugin definition, but volumeMounts for sure make sense.

Implementation-wise, that would be around this part of the agent code. There are some existing volume mounts (e.g. for SA tokens) that have to be handled as well.

> I believe there is value in having access to secret volumes, which can be managed through ESO (External Secrets Operator) for specific use cases where you want to dynamically manage secrets for plugins.

Also as I mentioned on Slack, there is support for using k8s Secrets in plugins already. We should add an example in those docs with this feature.

@agilgur5 agilgur5 added the solution/suggested A solution to the bug has been suggested. Someone needs to implement it. label May 15, 2024
@dgolja
Author

dgolja commented May 20, 2024

Thank you for looking at it. The more I think about the problem and become familiar with the code, the less sure I am about the best place to define the volumes.

I have a working solution if I just extend the ExecutorPlugin spec by adding sidecar.volumes.

git patch without tests:

diff --git a/pkg/plugins/spec/plugin_types.go b/pkg/plugins/spec/plugin_types.go
index 28cdb73ce..61698d4ac 100644
--- a/pkg/plugins/spec/plugin_types.go
+++ b/pkg/plugins/spec/plugin_types.go
@@ -28,6 +28,7 @@ type Sidecar struct {
 	// AutomountServiceAccount mounts the service account's token. The service account must have the same name as the plugin.
 	AutomountServiceAccountToken bool            `json:"automountServiceAccountToken,omitempty"`
 	Container                    apiv1.Container `json:"container"`
+	Volumes                      []apiv1.Volume  `json:"volumes,omitempty"`
 }
 
 func (s Sidecar) Validate() error {
diff --git a/workflow/controller/agent.go b/workflow/controller/agent.go
index 4c94329f9..6d827c3f9 100644
--- a/workflow/controller/agent.go
+++ b/workflow/controller/agent.go
@@ -285,6 +285,7 @@ func (woc *wfOperationCtx) getExecutorPlugins(ctx context.Context) ([]apiv1.Cont
 				volumes = append(volumes, *volume)
 				c.VolumeMounts = append(c.VolumeMounts, *volumeMount)
 			}
+			volumes = append(volumes, s.Volumes...)
 			sidecars = append(sidecars, *c)
 		}
 	}
diff --git a/workflow/util/plugins/configmap.go b/workflow/util/plugins/configmap.go
index a97df800a..a0112c6df 100644
--- a/workflow/util/plugins/configmap.go
+++ b/workflow/util/plugins/configmap.go
@@ -38,6 +38,15 @@ func ToConfigMap(p *spec.Plugin) (*apiv1.ConfigMap, error) {
 			"sidecar.container":                    string(data),
 		},
 	}
+
+	if p.Spec.Sidecar.Volumes != nil {
+		volumes, err := yaml.Marshal(p.Spec.Sidecar.Volumes)
+		if err != nil {
+			return nil, err
+		}
+		cm.Data["sidecar.volumes"] = string(volumes)
+	}
+
 	for k, v := range p.Annotations {
 		cm.Annotations[k] = v
 	}
@@ -69,5 +78,8 @@ func FromConfigMap(cm *apiv1.ConfigMap) (*spec.Plugin, error) {
 	if err := yaml.UnmarshalStrict([]byte(cm.Data["sidecar.container"]), &p.Spec.Sidecar.Container); err != nil {
 		return nil, err
 	}
+	if err := yaml.UnmarshalStrict([]byte(cm.Data["sidecar.volumes"]), &p.Spec.Sidecar.Volumes); err != nil {
+		return nil, err
+	}
 	return p, p.Validate()
 }
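
Since the patch has no tests yet, here is a rough sketch of what a unit test for the new sidecar.volumes ConfigMap key could look like. This is not part of the patch; the import paths, package name and testify usage are assumptions based on the repository layout, so treat it as a starting point only.

package plugins_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	apiv1 "k8s.io/api/core/v1"

	"github.com/argoproj/argo-workflows/v3/pkg/plugins/spec"
	"github.com/argoproj/argo-workflows/v3/workflow/util/plugins"
)

func TestToConfigMapIncludesSidecarVolumes(t *testing.T) {
	// Build a minimal plugin with a single Secret volume, mirroring the
	// hello/dotfile-secret example from the issue description.
	var p spec.Plugin
	p.Spec.Sidecar.Container = apiv1.Container{Name: "hello"}
	p.Spec.Sidecar.Volumes = []apiv1.Volume{{
		Name: "secret-volume",
		VolumeSource: apiv1.VolumeSource{
			Secret: &apiv1.SecretVolumeSource{SecretName: "dotfile-secret"},
		},
	}}

	cm, err := plugins.ToConfigMap(&p)
	require.NoError(t, err)

	// The patched ToConfigMap should marshal the volumes into the new key.
	assert.Contains(t, cm.Data, "sidecar.volumes")
	assert.Contains(t, cm.Data["sidecar.volumes"], "dotfile-secret")
}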

What I do not like about this solution is that, due to how the executor plugin works, the agent pod will always create/mount that volume whenever I use any plugin. Before checking the code, I had assumed that only the plugins you use in the workflow get added to the agent pod, but it appears that ALL the plugins are always added to the agent pod.

For example, if I define plugin-1 and use it in my workflow, the agent pod will look something like:

apiVersion: v1
kind: Pod
spec:
  volumes:
  # ...standard agent volumes...
  containers:
  - name: plugin-1
    args: ...
  - name: main
    command:
    - argoexec
    args:
    - agent
    - main
    - --loglevel
    - debug
    # ...
  initContainers:
  - name: init
    args:
    - agent
    - init
    - --loglevel
    - debug
    # ...

If I add plugin-2, which uses a volume, the agent pod will look like:

apiVersion: v1
kind: Pod
spec:
  volumes:
  # ... standard agent volumes ...
  # ... plugin-2 volumes ...
  containers:
  - name: plugin-1
    args: ...
  - name: plugin-2
    args: ...
    volumeMounts:
      - ...
  - name: main
    command:
    - argoexec
    args:
    - agent
    - main
    - --loglevel
    - debug
    # ...
  initContainers:
  - name: init
    args:
    - agent
    - init
    - --loglevel
    - debug
    # ...

From then on, whichever plugin I use in my workflow, the agent pod will still define all of them. This is really inefficient: the more plugins I define, the more resources I need to run any workflow that uses plugins, regardless of whether it uses all of them or just one.

Anyway, back to the topic. Originally I was mostly interested in ConfigMap or Secret volumes, so for those use cases the above solution may work. However, things can get a bit messier if someone uses a persistentVolumeClaim. One way to address this would be to limit the volume types that can be defined for an ExecutorPlugin.
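
For illustration only, a minimal sketch of such a restriction, e.g. something that Sidecar.Validate() in plugin_types.go could call. The function name and the exact allow-list are made up here, not an agreed design:

package spec

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
)

// validatePluginVolumes rejects volume sources other than configMap, secret
// and projected, so a plugin cannot e.g. request a persistentVolumeClaim.
// (Hypothetical helper; the allowed set is up for discussion.)
func validatePluginVolumes(volumes []apiv1.Volume) error {
	for _, v := range volumes {
		// apiv1.Volume embeds VolumeSource, so the source fields are
		// accessible directly on the volume.
		if v.ConfigMap == nil && v.Secret == nil && v.Projected == nil {
			return fmt.Errorf("volume %q: only configMap, secret and projected volume sources are allowed in an ExecutorPlugin", v.Name)
		}
	}
	return nil
}

Since FromConfigMap in the patch already calls p.Validate(), a check like this would reject a disallowed volume type as soon as the plugin ConfigMap is loaded.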

Anyway, I hope the explanation makes sense. I will try to reach out to you on Slack, because it may be easier to discuss it there.
