
Configmap Names for AccessRules are misaligned between oathkeeper and oathkeeper-maester #512

Open
stefan-schweer opened this issue Sep 16, 2022 · 4 comments
Labels
bug Something is not working.

Comments


stefan-schweer commented Sep 16, 2022

Describe the bug

The ConfigMap names in oathkeeper and oathkeeper-maester are not aligned, and there seems to be no option to make oathkeeper actually use the ConfigMap generated by oathkeeper-maester.

Reproducing the bug

  1. Run helm template ory/oathkeeper.
  2. Check the value of the --rulesConfigmapName argument in the command of the oathkeeper-maester deployment
     and compare it with the ConfigMap name in the volumes section of the oathkeeper deployment (see the command sketch below).
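
A minimal command sketch of those two steps (the grep patterns are illustrative and assume the default resource names rendered below):

    # Render the chart (Helm 3 uses a default release name when none is given)
    helm template ory/oathkeeper > rendered.yaml

    # ConfigMap name the maester controller is told to write rules into
    grep -- '--rulesConfigmapName' rendered.yaml

    # ConfigMap the oathkeeper Deployment mounts at /etc/rules
    grep -A 2 'oathkeeper-rules-volume' rendered.yaml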

Relevant log output

No response

Relevant configuration

No response

Version

0.25.3

On which operating system are you observing this issue?

Linux

In which environment are you deploying?

Kubernetes

stefan-schweer added the bug label on Sep 16, 2022
stefan-schweer changed the title from "Configmap Names for AccessRules are misaligned between oathkeeper and oahtkeeper-maester" to "Configmap Names for AccessRules are misaligned between oathkeeper and oathkeeper-maester" on Sep 16, 2022
@stefan-schweer (Author):

There is a workaround: set oathkeeper-maester.oathkeeperFullnameOverride to the full name of oathkeeper (Chart.Name + Release.Name).
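
A sketch of what that could look like in the parent chart's values (the values file layout and placeholder are illustrative; only oathkeeperFullnameOverride is taken from the workaround above):

    # values.yaml (sketch): point maester at the ConfigMap the oathkeeper chart actually renders
    oathkeeper-maester:
      oathkeeperFullnameOverride: <release-name>-oathkeeper   # e.g. foobar-oathkeeper for a release named foobar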

@Demonsthere (Collaborator):

This could be improved by setting the fullnameOverride in the oathkeeper values for the maester controller; sadly, we cannot derive the name out of the box without using the tpl mechanism.
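
As a rough illustration of the tpl idea (the helper name and logic below are hypothetical, not the chart's current API), the subchart could pass a user-supplied, templated override through tpl so it can reference the release name:

    {{- /* _helpers.tpl (hypothetical sketch): resolve the rules ConfigMap name */ -}}
    {{- define "oathkeeper-maester.rulesConfigMapName" -}}
    {{- if .Values.oathkeeperFullnameOverride -}}
    {{- tpl .Values.oathkeeperFullnameOverride . -}}-rules
    {{- else -}}
    {{- .Release.Name -}}-oathkeeper-rules
    {{- end -}}
    {{- end -}}

A user could then set oathkeeper-maester.oathkeeperFullnameOverride to something like '{{ .Release.Name }}-oathkeeper' and have it resolve to the matching name at render time.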

@Demonsthere (Collaborator):

Hi there!
Following your lead, I ran helm template oathkeeper helm/charts/oathkeeper --debug.
The oathkeeper deployment:

---
# Source: oathkeeper/templates/deployment-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oathkeeper
  namespace: default
  labels:
    app.kubernetes.io/name: oathkeeper
    helm.sh/chart: oathkeeper-0.25.4
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/version: "v0.39.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: oathkeeper
      app.kubernetes.io/instance: oathkeeper
  template:
    metadata:
      labels:
        app.kubernetes.io/name: oathkeeper
        helm.sh/chart: oathkeeper-0.25.4
        app.kubernetes.io/instance: oathkeeper
        app.kubernetes.io/version: "v0.39.0"
        app.kubernetes.io/managed-by: Helm
      annotations:        
        checksum/oathkeeper-config: 069a0b48be053add1a3c40ebe9ca1557e38de8746af82a91c86c08a93a5d7da3
        checksum/oathkeeper-rules: 5a23cbc9399111fa2de7e1fe1c78e17a183c0fee761b70f8a2fb95027f509d4d
        checksum/oauthkeeper-secrets: 2a459e207048049f6c21e0303aebeebabe097204294fa7e2ca78e5f38d2fb707
    spec:
      volumes:
        - name: oathkeeper-config-volume
          configMap:
            name: oathkeeper-config
        - name: oathkeeper-rules-volume
          configMap:
            name: oathkeeper-rules
        - name: oathkeeper-secrets-volume
          secret:
            secretName: oathkeeper
      serviceAccountName: oathkeeper
      automountServiceAccountToken: false
      initContainers:
      containers:
        - name: oathkeeper
          image: "oryd/oathkeeper:v0.39.0"
          imagePullPolicy: IfNotPresent
          command: 
            - "oathkeeper"
          args:
            - "serve"
            - "--config" 
            - "/etc/config/config.yaml"
          env:
          volumeMounts:
            - name: oathkeeper-config-volume
              mountPath: /etc/config
              readOnly: true
            - name: oathkeeper-rules-volume
              mountPath: /etc/rules
              readOnly: true
            - name: oathkeeper-secrets-volume
              mountPath: /etc/secrets
              readOnly: true
          ports:
            - name: http-api
              containerPort: 4456
              protocol: TCP
            - name: http-proxy
              containerPort: 4455
              protocol: TCP
            - name: http-metrics
              protocol: TCP
              containerPort: 9000
          livenessProbe:
            httpGet:
              path: /health/alive
              port: http-api
          readinessProbe:
            httpGet:
              path: /health/ready
              port: http-api
          resources:
            {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000

The mounted CM:

configMap:
  name: oathkeeper-rules

and the CM itself:

---
# Source: oathkeeper/templates/configmap-rules.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: oathkeeper-rules
  namespace: default
  labels:
    app.kubernetes.io/name: oathkeeper
    helm.sh/chart: oathkeeper-0.25.4
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/version: "v0.39.0"
    app.kubernetes.io/managed-by: Helm
data:
  "access-rules.json": |-
    []

The oathkeeper-maester deployment:

---
# Source: oathkeeper/charts/oathkeeper-maester/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oathkeeper-oathkeeper-maester
  namespace: default
  labels:
    app.kubernetes.io/name: oathkeeper-maester
    helm.sh/chart: oathkeeper-maester-0.25.4
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/version: "v0.1.7"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      control-plane: controller-manager
      app.kubernetes.io/name: oathkeeper-oathkeeper-maester
      app.kubernetes.io/instance: oathkeeper
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        control-plane: controller-manager
        app.kubernetes.io/name: oathkeeper-oathkeeper-maester
        app.kubernetes.io/instance: oathkeeper
      annotations:
    spec:
      containers:
        - name: oathkeeper-maester
          image: "oryd/oathkeeper-maester:v0.1.7"
          imagePullPolicy: IfNotPresent
          command:
            - /manager
          args:
            - --metrics-addr=0.0.0.0:8080
            - controller
            - --rulesConfigmapName=oathkeeper-rules
            - --rulesConfigmapNamespace=default
          env:
          resources:
            {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
      serviceAccountName: oathkeeper-oathkeeper-maester-account
      automountServiceAccountToken: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 10
      nodeSelector:

and the rules ConfigMap name argument: --rulesConfigmapName=oathkeeper-rules

It looks like it matches well 🤔

@Demonsthere (Collaborator):

OK, however, if we install the chart under a different release name (helm template foobar helm/charts/oathkeeper --debug), then there is in fact a mismatch: the rendered ConfigMap is named foobar-oathkeeper-rules, while the maester argument is --rulesConfigmapName=foobar-rules.
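
A quick way to see both sides of the mismatch at once (illustrative one-liner):

    helm template foobar helm/charts/oathkeeper | grep -E 'name: .*-rules|--rulesConfigmapName'

which should show name: foobar-oathkeeper-rules alongside --rulesConfigmapName=foobar-rules.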
