
K8s secrets reference from step #3655

Draft: wants to merge 4 commits into base: main
Conversation

@zc-devs (Contributor, author) commented Apr 27, 2024

Part of #3582

Pipeline:

skip_clone: true
steps:
  secret-test:
    image: alpine
    commands:
      - echo $$AWS_ACCESS_KEY_ID
      - echo $$AWS_SECRET_ACCESS_KEY
    backend_options:
      kubernetes:
        secretNames:
          - test-secret

Test secret:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: test-woodpecker-runtime
data:
  AWS_ACCESS_KEY_ID: N0lrZHNzb0xleGFtcGxlNkpJVnZDRXE=
  AWS_SECRET_ACCESS_KEY: TXk0WE5BYXNleGFtcGxldXRKRHNUWHc=
type: Opaque
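
The data values above are plain base64; decoding them yields exactly the strings echoed in the second test output below, which is a quick sanity check. For instance, in Go:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Decode the two data values from the test Secret above.
	for _, enc := range []string{
		"N0lrZHNzb0xleGFtcGxlNkpJVnZDRXE=",
		"TXk0WE5BYXNleGFtcGxldXRKRHNUWHc=",
	} {
		dec, err := base64.StdEncoding.DecodeString(enc)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(dec))
	}
}
// Output:
// 7IkdssoLexample6JIVvCEq
// My4XNAasexampleutJDsTXw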

1. Default Agent config.

In the log I see a warning:

{"level":"debug","time":"2024-04-27T18:17:49Z","caller":"/src/pipeline/backend/kubernetes/pod.go:166","message":"Secret names were defined in backend options, but its using disallowed by instance configuration "}

WP output:

+ echo $AWS_ACCESS_KEY_ID

+ echo $AWS_SECRET_ACCESS_KEY

2. Allowing native secrets.

Agent config:

data:
  WOODPECKER_BACKEND: kubernetes
  WOODPECKER_BACKEND_K8S_NATIVE_SECRETS_ALLOW_FROM_STEP: 'true'
  ...

WP output:

+ echo $AWS_ACCESS_KEY_ID
7IkdssoLexample6JIVvCEq
+ echo $AWS_SECRET_ACCESS_KEY
My4XNAasexampleutJDsTXw

@qwerty287 added the enhancement (improve existing features) and backend/kubernetes labels on Apr 28, 2024
@qwerty287 (Contributor)
Not sure how complex this is to implement, but if you use internal secrets and print them out to the logs, they're replaced with ********. Can we apply something similar here too? If not, this needs a warning in the docs.
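
For context, masking of that kind boils down to replacing every known secret value in the log stream before a line is emitted. A minimal sketch of the idea in Go (illustrative only, not Woodpecker's actual code):

package main

import (
	"fmt"
	"strings"
)

// newMasker builds a replacer that hides every known secret value.
// With native Kubernetes secrets the agent never sees the values,
// which is exactly why this approach does not apply here yet.
func newMasker(secretValues []string) *strings.Replacer {
	pairs := make([]string, 0, len(secretValues)*2)
	for _, v := range secretValues {
		if v == "" {
			continue // never build a replacer pair from an empty value
		}
		pairs = append(pairs, v, "********")
	}
	return strings.NewReplacer(pairs...)
}

func main() {
	mask := newMasker([]string{"7IkdssoLexample6JIVvCEq"})
	fmt.Println(mask.Replace("+ echo 7IkdssoLexample6JIVvCEq"))
	// Output: + echo ********
}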

Also, in my opinion WOODPECKER_BACKEND_K8S_NATIVE_SECRETS_ALLOW_FROM_STEP is way too long for an env name - wouldn't WOODPECKER_BACKEND_K8S_ALLOW_NATIVE_SECRETS be enough?

As an alternative to native support, maybe we could implement an HTTP service extension doing this? That feature is still in development and not supported yet. I understand, however, if you would like to support it natively.

@zc-devs (Contributor, author) commented Apr 28, 2024

> if you use internal secrets and print them out to the logs, they're replaced with ********

I know, I know... I also thought about it and decided to warn in the docs too.
Right now we don't have the secret values available to find them in the logs. However, once the second part is implemented, this becomes achievable.

> WOODPECKER_BACKEND_K8S_ALLOW_NATIVE_SECRETS

I'll take it.

> http service extension doing this?

An extension or an external service?
I don't like extensions :) but I like the idea of an external (micro)service. It might be useful for deduplicating the secret-masking logic, at least.

It seems I cannot manage it alone.
Is there some example / skeleton of such a service? I have ideas for the service implementation, though. So I can do it, but I need the Go "template". Help with the server-side code might be needed as well.
How should it be organized? In this repo? Then a POC branch is probably needed? Otherwise a separate repo?

Edit: Can the external secrets work together with the internal ones at the same time?
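
A minimal skeleton of such a service could be an ordinary net/http server exchanging JSON. Everything below (the /secret path, the payload fields) is hypothetical, since the extension protocol is still in development:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// secretRequest and secretResponse are hypothetical payload shapes;
// the real extension protocol has not been designed yet.
type secretRequest struct {
	Repo string `json:"repo"`
	Name string `json:"name"`
}

type secretResponse struct {
	Value string `json:"value"`
}

func main() {
	http.HandleFunc("/secret", func(w http.ResponseWriter, r *http.Request) {
		var req secretRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Look up the secret in some external store here.
		resp := secretResponse{Value: "dummy"}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}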

@qwerty287 (Contributor) commented Apr 28, 2024

As I wrote, external services are not available yet and are still in development... They'll work similarly to config extensions (https://woodpecker-ci.org/docs/administration/external-configuration-api), but with the possibility to set them at global/org/user/repo level.
We can extend this to allow using internal and external secrets combined.
But this feature will probably take some more time to complete...

@zc-devs (Contributor, author) commented Apr 28, 2024

> But probably this feature will take some more time to complete

Let's give this PR a try then. Could you add the build_image label, please?

@dominic-p, could you test then?

@qwerty287 added the build_pr_images label (If set, the CI will build images for this PR and push to Dockerhub) on Apr 28, 2024
@dominic-p (Contributor)

Thanks for working on this! I'm ready to test this PR as soon as we get an image published.

@zc-devs (Contributor, author) commented May 1, 2024

And this too

@@ -236,6 +244,25 @@ func containerPort(port types.Port) v1.ContainerPort {
	}
}

func containerSecrets(secretNames []string) []v1.EnvFromSource {
	if secretNames == nil || len(secretNames) == 0 {
@zc-devs (Contributor, author):
Suggested change:
-	if secretNames == nil || len(secretNames) == 0 {
+	if len(secretNames) == 0 {
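
For context, a helper with this signature would map each secret name to an EnvFromSource entry with a SecretRef, which makes Kubernetes inject every key of the named Secret as an environment variable into the step container. A sketch against the upstream k8s.io/api/core/v1 types (the exact PR code may differ):

package kubernetes

import (
	v1 "k8s.io/api/core/v1"
)

// containerSecrets turns a list of Kubernetes Secret names into EnvFrom
// sources; the kubelet then exposes every key of each referenced Secret
// as an environment variable inside the container.
func containerSecrets(secretNames []string) []v1.EnvFromSource {
	if len(secretNames) == 0 {
		return nil
	}
	sources := make([]v1.EnvFromSource, 0, len(secretNames))
	for _, name := range secretNames {
		sources = append(sources, v1.EnvFromSource{
			SecretRef: &v1.SecretEnvSource{
				LocalObjectReference: v1.LocalObjectReference{Name: name},
			},
		})
	}
	return sources
}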

if config.NativeSecretsAllowFromStep {
	container.EnvFrom = containerSecrets(options.SecretNames)
} else {
	log.Debug().Msg("Secret names were defined in backend options, but its using disallowed by instance configuration ")
Review comment:

Minor nit, this doesn't read quite right to me. I would go with:

Secret names were defined in backend options, but secret access is disallowed by instance configuration.

@dominic-p (Contributor)

Ok, I was able to test this tonight, and it seems to be working as expected! I was able to successfully build and push my first image to my local repo! 🎉🎉🎉

I did run into one snag. As I mentioned, my main use case for this feature is to provide container image registry credentials to my build script. These credentials are currently stored in a Kubernetes registry cred secret. This kind of secret uses the hardcoded key .dockerconfigjson to reference its data. The issue is that with the current implementation we wind up with an environment variable with a dot in its name. I had a hard time using it since env variable names really aren't supposed to have dots in them.

I was able to work around the issue with printenv, but it's kind of awkward. I'm not sure if this problem is worth fixing in the current iteration of this feature or not. Maybe we can just add a short blurb to the documentation about it? Or, a simple solution might be to replace any special characters in the secret key names with underscores and then document that convention?
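
The underscore convention could be a small key-name sanitizer that rewrites anything outside [A-Za-z0-9_]. A hypothetical sketch, not part of this PR:

package main

import (
	"fmt"
	"regexp"
)

// invalidEnvChars matches every character that is not valid in a
// POSIX environment variable name.
var invalidEnvChars = regexp.MustCompile(`[^A-Za-z0-9_]`)

// sanitizeEnvName replaces invalid characters with underscores, so a
// key like ".dockerconfigjson" becomes "_dockerconfigjson".
func sanitizeEnvName(key string) string {
	return invalidEnvChars.ReplaceAllString(key, "_")
}

func main() {
	fmt.Println(sanitizeEnvName(".dockerconfigjson")) // _dockerconfigjson
}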

@zc-devs (Contributor, author) commented May 3, 2024

Could you provide an example?

  1. Secret.
  2. Build tool (kaniko, buildx, etc.) and its configuration.

I use the kaniko debug image and plain secrets:

    secrets:
      - docker_usr
      - docker_pwd

kaniko requires a Docker auth file at /kaniko/.docker/config.json, so I create this file manually:

DOCKER_AUTHS=$(printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH")
printf '%s' "$DOCKER_AUTHS" > "/kaniko/.docker/config.json"

In my use case the requirements could be:

  1. Existing dockerconfigjson secret
  2. Mount as file to /kaniko/.docker/config.json

The current implementation treats .dockerconfigjson as an env var. Suppose it were DOCKER_AUTHS instead. What do you do next? Do you write it to a file and point your build tool at it?

@dominic-p (Contributor)

Thanks for the detailed feedback.

Here's an example secret from my cluster:

apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
metadata:
  name: reg-cred
  namespace: woodpecker
data:
  .dockerconfigjson: <base64 encoded JSON auth file here>

And, here's a portion of the build script that would use it. I'm using buildah to build my container images via a standard shell script.

#!/bin/sh

# Create a working container
dev=$(buildah from docker.io/alpine)

# ... build the image

# Save credentials in temporary file

# You would like to do something like this, but, of course, it won't work
# echo "$.dockerconfigjson" > /tmp/.dockerconfigjson

printenv ".dockerconfigjson" > /tmp/.dockerconfigjson

# Save and push the image to our local registry
buildah commit $dev "$name"

buildah push --authfile /tmp/.dockerconfigjson "$name" "docker://$registry_url/$name"

So, the simplest workflow for me is to simply dump the reg-cred secret data into a temporary file and then reference it as shown above.

Of course, I could use a regular Opaque secret and build the docker config JSON as you showed in your example (or reference the username and password with buildah push --creds), but since I already have the JSON in an existing secret it would be nice to leverage it. One benefit of doing it this way is that Kubernetes requires you to use this kind of secret in spec.imagePullSecrets. So, I can use the same secret to push to the local registry and pull from it.

At the end of the day, this is not a big deal. There are many usable workarounds. I just wanted to point out the awkwardness that you get when you wind up with an env variable that has a dot in its name.

@zc-devs (Contributor, author) commented May 6, 2024

Thanks for the explanation. This PR has to be reworked then. In the meantime I'll move it to a draft.

@zc-devs marked this pull request as a draft on May 6, 2024, 18:54
@dominic-p (Contributor)

Sure thing. My personal preference would be to release this as-is and then iterate from there. Even in its current form it's enough for my use case. But, I can understand if you want to wait until it's a bit more complete.

@zc-devs (Contributor, author) commented May 6, 2024

> iterate from there

Sure. But we have to decide on the syntax.
