Let's start by familiarizing ourselves with the goal of this activity.
Enforcing a deployment policy before deploying packages mitigates an attack not currently shown in the diagram, which we could call (I) "deploy unauthorized package": for example, deploying a debug package to a prod environment, or a malicious (integrity-protected) package to a prod environment.
Install the necessary software.
The admission controller is the component that admits artifacts / containers for deployment. It needs to be configured to verify deployment attestations. Verification requires the following metadata:
- Trusted roots, which is the metadata that defines:
- Which evaluators we trust - defined by their identity.
- Which protection type (e.g., service account, cluster ID) each evaluator is authoritative for.
- Required protection types, which is an optional set of mandatory protection types. To be considered authentic and trusted, a deployment attestation must contain the required protection types.
- Mode of enforcement, such as "enforce" or "audit". This allows administrators to onboard new teams and roll out policy upgrades in stages.
- Failure handling, which configures how unexpected errors or timeouts during evaluation are handled. Fail-open behavior admits deployments by default in such a scenario, whereas fail-closed behavior rejects them by default. The low latency and reliability of verifying deployment attestations should make these occurrences rare compared to real-time evaluation.
For example, a trusted root could (a) contain public key pubKeyA as the evaluator identity, (b) be authoritative for the protection types "google service account" and "Kubernetes namespace", and (c) require the protection type "google service account". A deployment attestation is then considered authentic and trusted if it (a) is signed with pubKeyA, (b, c) contains the required protection type "google service account", and (b) optionally contains a scope of type "Kubernetes namespace".
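Concretely, such a trusted root might be expressed in a configuration file along these lines. The field names below are illustrative assumptions for this sketch, not the actual schema of the workshop's configuration file:

```yaml
# Illustrative trusted-root configuration; field names are assumptions.
trusted_roots:
  - evaluator_identity:
      public_key: pubKeyA            # whose signatures we accept
    authoritative_for:               # protection types this evaluator may attest to
      - google_service_account
      - kubernetes_namespace
required_protection_types:           # must appear in every trusted attestation
  - google_service_account
mode: enforce                        # or "audit" for staged rollouts
failure_handling: fail_closed        # reject on evaluation errors / timeouts
```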
In this workshop, we use the open source policy engine Kyverno.
You should already have cosign installed as required for Activities 02 and 03. If that's not the case, follow the installation instructions.
In this demo, we use a local Kubernetes installation called minikube.
To install minikube, follow the instructions.
Start minikube:
# Kyverno 1.11.4 supports Kubernetes versions v1.25-v1.28, see https://kyverno.io/docs/installation/#compatibility-matrix.
$ minikube start --kubernetes-version=v1.28
Set the alias:
$ alias kubectl="minikube kubectl --"
Install Kyverno policy engine:
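One common installation method (assuming the Kyverno 1.11.4 release manifest, matching the version referenced above) is to apply the official install manifest:

```shell
# Install Kyverno 1.11.4 from its release manifest (version is an assumption
# based on the compatibility note above).
kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.11.4/install.yaml

# List the pods in the kyverno namespace; note the admission controller's
# pod name for the log-monitoring step below.
kubectl -n kyverno get pods
```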
Optional: Open a new terminal and monitor the logs for the admission controller and keep this terminal open:
# Replace 'kyverno-admission-controller-6dd8fd446c-4qck5' with the value for your installation (previous command).
$ kubectl -n kyverno logs -f kyverno-admission-controller-6dd8fd446c-4qck5
We need to configure Kyverno to verify the deployment attestation we created in Activity 03.
There are two relevant files to configure it:
- A verification configuration file containing the trusted roots, in kyverno/slsa-configuration.yml.
- A Kyverno enforcer file kyverno/slsa-enforcer.yml that verifies the deployment attestation using the trusted roots.
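The enforcer in the repository may differ in its details, but a Kyverno image-verification policy generally has the following rough shape (the policy name, predicate type, and key below are placeholders, not the workshop's actual values):

```yaml
# Sketch of a Kyverno image-verification policy; values are placeholders.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-deployment-attestation
spec:
  validationFailureAction: Enforce   # "Audit" corresponds to the audit mode above
  rules:
    - name: check-deployment-attestation
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "*"
          attestations:
            - type: <deployment-attestation-predicate-type>  # placeholder
              attestors:
                - entries:
                    - keys:
                        publicKeys: <evaluator-public-key>   # the trusted root's key
```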
Clone the repository locally. Then follow the steps:
- Update the attestation_creator field in the verification configuration file. Set it to the value of the evaluator identity that created your deployment attestation in Activity 03.
- Apply the configuration and the enforcer:
$ kubectl apply -f kyverno/slsa-configuration.yml
$ kubectl apply -f kyverno/slsa-enforcer.yml
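To confirm the policy was admitted, you can list Kyverno's cluster policies (this uses the standard Kyverno resource name):

```shell
# The enforcer should appear as a ClusterPolicy and report itself as ready.
kubectl get clusterpolicy
```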
Let's deploy the container we built in Activity 01. For that, we will use the k8/echo-server-deployment.yml pod definition.
Follow these steps:
- Edit the image in the pod definition.
- WARNING: Since we are running Kubernetes locally, there is no google service account to match against. To simulate one for this demo, we assume its value is exposed via the "cloud.google.com.v1/service_account" annotation. Set it to the service account configured for the container in your deployment policy.
- Deploy the container
$ kubectl apply -f k8/echo-server-deployment.yml
deployment.apps/echo-server-deployment created
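For reference, the simulated service-account annotation from the steps above belongs in the pod template's metadata. The image and account values below are placeholders; use the image you built in Activity 01 and the service account from your deployment policy:

```yaml
# Excerpt of the pod definition; values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
      annotations:
        # Simulated google service account (none exists on a local cluster).
        cloud.google.com.v1/service_account: <your-service-account>
    spec:
      containers:
        - name: echo-server
          image: <your-image-from-activity-01>
```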
Run the following commands to confirm the deployment succeeded:
$ kubectl get polr
NAME                                   KIND         NAME                                      PASS   FAIL   WARN   ERROR   SKIP   AGE
13f78700-4f91-44e6-aa1b-970ed83251dc   ReplicaSet   echo-server-deployment-5bcdd7d764         1      0      0      0       0      4m5s
6b8c9fe2-89fb-4388-92e3-67abdaf3feb0   Pod          echo-server-deployment-5bcdd7d764-87cxt   1      0      0      0       0      4m35s
8863a504-63d6-4455-b9dd-79e15f2bd75f   Pod          echo-server-deployment-5bcdd7d764-2rrrm   1      0      0      0       0      4m35s
977d4976-7ce3-4d97-861a-8a119f3c5e84   Pod          echo-server-deployment-5bcdd7d764-27h96   1      0      0      0       0      4m35s
c482b133-13b1-4678-bb2c-0de2d44c868d   Deployment   echo-server-deployment                    1      0      0      0       0      4m36s
Now update the pod definition with an image that is not allowed to run under this service account:
- Edit the image in the deployment file.
$ kubectl apply -f k8/echo-server-deployment.yml
...THIS SHOULD FAIL...
Update the pod definition back to its original value.
- Edit the "cloud.google.com.v1/service_account" annotation to a different service account.
$ kubectl apply -f k8/echo-server-deployment.yml
...THIS SHOULD FAIL...
To our knowledge, the google service account is not exposed to workloads on Google's GKE. One way to deploy a real-world version of this demo is to bind a Kubernetes service account to a google service account. This is out of scope for the workshop, and we leave it for future work. If you take on this task, please share the code with us!
Can you update the policy engine to support other types of protections, e.g., GKE cluster ID?
After completing this activity, you should be able to answer the following questions:
- What metadata is needed to configure the admission controller?