
Using Singularity-CRI kills Kubernetes' coredns & dns-autoscaler pods, giving "Error: could not start container: unexpected container state: 4" #379

Open
adamwoolhether opened this issue Dec 11, 2020 · 0 comments
Labels
bug Something isn't working

Comments

@adamwoolhether

What are the steps to reproduce this issue?

  1. Configure Kubernetes to use Singularity-CRI as its container runtime (i.e. point the kubelet at the Singularity-CRI socket; see the wiring sketched below).
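
For context, the kubelet here is pointed at Singularity-CRI with the usual remote-runtime flags, along these lines (exact flags may differ per setup; /var/run/singularity.sock is Singularity-CRI's default socket path):

kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/singularity.sock \
        --image-service-endpoint=unix:///var/run/singularity.sock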

What happens?

Pods created by the coredns and dns-autoscaler Deployments go into CrashLoopBackOff.

Any logs, error output, comments, etc?

k logs -n kube-system dns-autoscaler-7d4cfd5f55-bh299

FATAL:   failed to apply security configuration: failed adding rule condition for syscall personality: two checks on same syscall argument
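
The "two checks on same syscall argument" failure matches libseccomp's rule semantics: within a single rule, each syscall argument may be compared only once. The docker/default profile allows personality(2) for several distinct values of argument 0 (e.g. 0x0 and 0x8) as separate profile entries, so the error suggests those comparisons are being merged into a single rule somewhere in the profile translation. A minimal standalone sketch of that behavior using github.com/seccomp/libseccomp-golang (illustrative only, not Singularity-CRI's actual code path):

package main

import (
	"fmt"

	seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Filter whose default action is to return an errno for unmatched syscalls.
	filter, err := seccomp.NewFilter(seccomp.ActErrno)
	if err != nil {
		panic(err)
	}
	defer filter.Release()

	personality, err := seccomp.GetSyscallFromName("personality")
	if err != nil {
		panic(err)
	}

	// Two equality checks on the same syscall argument (index 0)...
	condA, _ := seccomp.MakeCondition(0, seccomp.CompareEqual, 0x0)
	condB, _ := seccomp.MakeCondition(0, seccomp.CompareEqual, 0x8)

	// ...merged into a single rule: libseccomp rejects this with EINVAL.
	err = filter.AddRuleConditional(personality, seccomp.ActAllow,
		[]seccomp.ScmpCondition{condA, condB})
	fmt.Println("merged rule:", err)

	// One rule per allowed value is valid, which is how the docker/default
	// profile expresses it (a separate entry per personality value).
	for _, v := range []uint64{0x0, 0x8} {
		cond, _ := seccomp.MakeCondition(0, seccomp.CompareEqual, v)
		if err := filter.AddRuleConditional(personality, seccomp.ActAllow,
			[]seccomp.ScmpCondition{cond}); err != nil {
			panic(err)
		}
	}
	fmt.Println("split rules: ok")
}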

k describe pods -n kube-system dns-autoscaler-7d4cfd5f55-bh299

Name:                 dns-autoscaler-7d4cfd5f55-bh299
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 qpod3-cn01/10.108.16.113
Start Time:           Fri, 11 Dec 2020 01:30:30 -0500
Labels:               k8s-app=dns-autoscaler
                      pod-template-hash=7d4cfd5f55
Annotations:          scheduler.alpha.kubernetes.io/critical-pod:
                      seccomp.security.alpha.kubernetes.io/pod: docker/default
Status:               Running
IP:                   10.233.107.37
IPs:
  IP:           10.233.107.37
Controlled By:  ReplicaSet/dns-autoscaler-7d4cfd5f55
Containers:
  autoscaler:
    Container ID:  singularity://c4c921672d3025167a5edd27300de01aa675e5e6635e2340f4e709f05843a79e
    Image:         k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.6.0
    Image ID:      382615152b0ac8d8da7d628e4c34a0d0bb41cc0f6fd4efc69555365f8b96ff24
    Port:          <none>
    Host Port:     <none>
    Command:
      /cluster-proportional-autoscaler
      --namespace=kube-system
      --default-params={"linear":{"preventSinglePointFailure":true,"coresPerReplica":256,"nodesPerReplica":16,"min":2}}
      --logtostderr=true
      --v=2
      --configmap=dns-autoscaler
      --target=Deployment/coredns
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       Error
      Message:      exited with code 255
      Exit Code:    255
      Started:      Wed, 31 Dec 1969 19:00:00 -0500
      Finished:     Fri, 11 Dec 2020 01:41:36 -0500
    Ready:          False
    Restart Count:  7
    Requests:
      cpu:        20m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from dns-autoscaler-token-42fc5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dns-autoscaler-token-42fc5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dns-autoscaler-token-42fc5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                 Message
  ----     ------     ----                 ----                 -------
  Normal   Scheduled  <unknown>            default-scheduler    Successfully assigned kube-system/dns-autoscaler-7d4cfd5f55-bh299 to qpod3-cn01
  Normal   Pulled     9m48s (x5 over 11m)  kubelet, qpod3-cn01  Container image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.6.0" already present on machine
  Normal   Created    9m48s (x5 over 11m)  kubelet, qpod3-cn01  Created container autoscaler
  Warning  Failed     9m47s (x5 over 11m)  kubelet, qpod3-cn01  Error: could not start container: unexpected container state: 4
  Warning  BackOff    69s (x47 over 11m)   kubelet, qpod3-cn01  Back-off restarting failed container

Similar errors are reported for the coredns pods.
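
One detail that stands out in the pod spec above is the annotation seccomp.security.alpha.kubernetes.io/pod: docker/default, which is what pulls the docker/default seccomp profile in. As a diagnostic (not a fix, since it disables seccomp for the pod), overriding the annotation to unconfined on the Deployment's pod template should let the container start if the profile translation is indeed the trigger:

k patch deployment dns-autoscaler -n kube-system --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"seccomp.security.alpha.kubernetes.io/pod":"unconfined"}}}}}'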

Environment?

OS distribution and version: CentOS Linux 7 (Core), kernel 3.10.0-957.el7.x86_64

go version: go version go1.14.12 linux/amd64

go env:

GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/root/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build256281970=/tmp/go-build -gno-record-gcc-switches"

Singularity-CRI version: 1.0.0-beta.5

Singularity version: 3.7.0

Kubernetes version: 1.17.4

@adamwoolhether added the bug label on Dec 11, 2020