Add pod security context from func.yaml #2108

Open
zalsader opened this issue Dec 20, 2023 · 6 comments · May be fixed by #2174
Labels
kind/good-first-issue Denotes an issue ready for a new contributor.

Comments

@zalsader
Contributor

zalsader commented Dec 20, 2023

It would be nice to be able to set the pod security context in func.yaml. I ran into this when I was trying to mount a PVC and could not write to it. After a lot of digging, I found that I needed to set fsGroup like this:

kubectl patch services.serving/<name> --type merge \
    -p '{"spec": {"template": {"spec": {"securityContext": {"fsGroup":1000}}}}}'

This is because the default group is 1000 (I used Go's os/user package to find it). I would prefer to be able to set that from func.yaml, or to have it set automatically to some sane default.
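For illustration, a hypothetical func.yaml addition could look like the sketch below; the securityContext key does not exist today, and its name and placement (under run:) are assumptions rather than the project's actual schema:

run:
  volumes:
  - presistentVolumeClaim:
      claimName: knative-pc-cephfs
    path: /files
  securityContext:
    fsGroup: 1000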

Current func.yaml:
specVersion: 0.35.0
name: consumer
runtime: go
registry: <redacted>
image: <redacted>
imageDigest: <redacted>
created: 2023-12-13T00:39:05.888786906-05:00
build:
  builder: pack
run:
  volumes:
  - presistentVolumeClaim:
      claimName: knative-pc-cephfs
    path: /files
deploy:
  namespace: default
Here's the currently generated service:

You can see that there is no pod-level securityContext in the pod spec.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    dapr.io/app-id: consumer
    dapr.io/app-port: "8080"
    dapr.io/enable-api-logging: "true"
    dapr.io/enabled: "true"
    dapr.io/metrics-port: "9092"
    serving.knative.dev/creator: kubernetes-admin
    serving.knative.dev/lastModifier: kubernetes-admin
  creationTimestamp: "2023-12-20T05:20:10Z"
  generation: 1
  labels:
    boson.dev/function: "true"
    boson.dev/runtime: go
    function.knative.dev: "true"
    function.knative.dev/name: consumer
    function.knative.dev/runtime: go
  name: consumer
  namespace: default
  resourceVersion: "11510806"
  uid: ...
spec:
  template:
    metadata:
      annotations:
        dapr.io/app-id: consumer
        dapr.io/app-port: "8080"
        dapr.io/enable-api-logging: "true"
        dapr.io/enabled: "true"
        dapr.io/metrics-port: "9092"
      creationTimestamp: null
      labels:
        boson.dev/function: "true"
        boson.dev/runtime: go
        function.knative.dev: "true"
        function.knative.dev/name: consumer
        function.knative.dev/runtime: go
    spec:
      containerConcurrency: 0
      containers:
      - env:
        - name: BUILT
          value: 20231220T002010
        - name: ADDRESS
          value: 0.0.0.0
        image: <redacted>
        livenessProbe:
          httpGet:
            path: /health/liveness
            port: 0
        name: user-container
        readinessProbe:
          httpGet:
            path: /health/readiness
            port: 0
          successThreshold: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /files
          name: pvc-knative-pc-cephfs
      enableServiceLinks: false
      timeoutSeconds: 300
      volumes:
      - name: pvc-knative-pc-cephfs
        persistentVolumeClaim:
          claimName: knative-pc-cephfs
  traffic:
  - latestRevision: true
    percent: 100
@lkingland
Member

Agreed on both suggestions. I'll add this to our roadmap as a "ready to work" item.

lkingland added the kind/good-first-issue label on Jan 9, 2024
@gouthamhusky

Hello @zalsader! I'd like to work on this issue. Could you share some more context about how the func.yaml file is used to generate the service definition? Pointing to the relevant source files would help too! This is my first issue, and I would love to contribute here!

@zalsader
Contributor Author

zalsader commented Jan 11, 2024

@gouthamhusky Sure, yeah. Looking at existing closed PRs is a good first step; for example, my first contribution to the project was adding PVCs to the allowed volume options. I am happy to review any PRs you create, but I'm not a maintainer, so the maintainers can provide more guidance.

@Sanket-0510
Contributor

Sanket-0510 commented Feb 18, 2024

Hey @zalsader, after digging into this issue I found that the pod securityContext is set from here:

SecurityContext: defaultPodSecurityContext(),

and defaultPodSecurityContext is defined here:

func defaultPodSecurityContext() *corev1.PodSecurityContext {
	// change ownership of the mounted volume to the first non-root user uid=1000
	if IsOpenShift() {
		return nil
	}
	runAsUser := int64(1001)
	runAsGroup := int64(1002)
	return &corev1.PodSecurityContext{
		RunAsUser:  &runAsUser,
		RunAsGroup: &runAsGroup,
		FSGroup:    &runAsGroup,
	}
}

If we have a defaultPodSecurityContext defined, why is it not reflected in the generated service? For the container securityContext, by contrast, we do get the fields that are set, like:

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault

which is set by this function:

func defaultSecurityContext(client *kubernetes.Clientset) *corev1.SecurityContext {
	runAsNonRoot := true
	sc := &corev1.SecurityContext{
		Privileged:               new(bool),
		AllowPrivilegeEscalation: new(bool),
		RunAsNonRoot:             &runAsNonRoot,
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		SeccompProfile:           nil,
	}
	if info, err := client.ServerVersion(); err == nil {
		var v *semver.Version
		v, err = semver.NewVersion(info.String())
		if err == nil && v.Compare(oneTwentyFour) >= 0 {
			sc.SeccompProfile = &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault}
		}
	}
	return sc
}

@Sanket-0510
Contributor

Sanket-0510 commented Feb 18, 2024

Still, if we want to set the podSecurityContext explicitly, we will have to define a new schema in func.yaml, create a new struct, set the value of FSGroup, and pass it here:

SecurityContext: defaultPodSecurityContext(),

Am I on the right track? If so, it would be nice if you could assign this issue to me.
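For illustration, a minimal sketch of that approach; SecurityContextSpec, its yaml key, and effectivePodSecurityContext are hypothetical names, not the actual implementation, and it assumes the code sits in the same package as the defaultPodSecurityContext shown above:

import corev1 "k8s.io/api/core/v1"

// SecurityContextSpec is a hypothetical func.yaml field, e.g. under run: or deploy:
//
//   securityContext:
//     fsGroup: 1000
type SecurityContextSpec struct {
	FSGroup *int64 `yaml:"fsGroup,omitempty"`
}

// effectivePodSecurityContext merges the user-provided value over the existing
// default before the PodSpec is built.
func effectivePodSecurityContext(user *SecurityContextSpec) *corev1.PodSecurityContext {
	sc := defaultPodSecurityContext() // existing default; nil on OpenShift
	if user == nil || user.FSGroup == nil {
		return sc
	}
	if sc == nil {
		sc = &corev1.PodSecurityContext{}
	}
	sc.FSGroup = user.FSGroup
	return sc
}

The call site shown above would then become SecurityContext: effectivePodSecurityContext(...), with the argument read from wherever the new func.yaml field ends up living.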

@Sanket-0510
Contributor

Let's say I define a PodSecurityContext property in func.yaml: should it go in RunSpec or in DeploySpec? I have a question about that.

Sanket-0510 linked a pull request (#2174) on Feb 20, 2024 that will close this issue.