
Exit Code missing for failed and succeeded pods #417

Open
alexjebens opened this issue Jan 16, 2023 · 4 comments


@alexjebens

Describe the Issue
kubectl describe pod does not show the container exit codes for pods in either Failed or Succeeded status.

Steps To Reproduce
Run a container that performs a long-running job, like azcopy, on a virtual node and let it run to completion.
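
For reference, a minimal reproduction along these lines (job name and image are placeholders; the node selectors and tolerations match the virtual node setup shown in the output below):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: exit-code-repro
spec:
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        beta.kubernetes.io/os: linux
        kubernetes.io/role: agent
        type: virtual-kubelet
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists
      containers:
        - name: main
          image: busybox
          command: ["sh", "-c", "sleep 60"]
EOF

# Once the pod completes, State should show Terminated with an Exit Code, but does not:
kubectl describe pod -l job-name=exit-code-repro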

Expected behavior
The exit code should be displayed. Particularly in the context of ACI, the OOMKilled exit code (137) is highly relevant.

Virtual-kubelet version
1.4.5

Kubernetes version
1.23.8

Additional context

@alexjebens alexjebens changed the title Exit codes missing for failed and succeeded pods Information missing for failed and succeeded pods Jan 26, 2023
@alexjebens alexjebens changed the title Information missing for failed and succeeded pods Exit Code missing for failed and succeeded pods Jan 26, 2023
@helayoty
Member

helayoty commented Feb 1, 2023

@alexjebens Would you please upgrade to a newer version (> v1.4.8)?
Also, would you please provide the output you're getting?

@alexjebens
Author

Is there a supported way to update this in an AKS cluster where it was installed using the addon, not Helm?

@alexjebens
Author

Here is some output. These are the same pod definition, just with different tolerations for ACI.

Note the difference in Containers.azbackup.State.
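
To check whether the exit code is also absent from the raw pod status (rather than just from the describe output), a query like this can be used (pod name taken from the ACI output below):

kubectl get pod azbackup-fs-cron-27914490-hs4tj \
  -o jsonpath='{.status.containerStatuses[0].state}'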

kubectl describe pod using ACI:

Name:             azbackup-fs-cron-27914490-hs4tj
Namespace:        default
Priority:         0
Service Account:  default
Node:             virtual-node-aci-linux/10.1.0.53
Labels:           controller-uid=03ed794b-bce5-4e8d-93c5-4224caedeeed
                  job-name=azbackup-fs-cron-27914490
Annotations:      <none>
Status:           Succeeded
IP:               10.241.0.5
IPs:
  IP:           10.241.0.5
Controlled By:  Job/azbackup-fs-cron-27914490
Containers:
  azbackup:
    Container ID:   aci://09c6ae6112eb0809679e91fb98a42598fadafa381e1dcbf48f2fc4736863ccc7
    Image:          <redacted>
    Image ID:       <redacted>
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       Terminated
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     4
      memory:  8G
    Environment Variables from:
      <redacted>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5ttxn (ro)
Conditions:
  Type           Status
  Ready          False
  Initialized    True
  PodScheduled   True
Volumes:
  kube-api-access-5ttxn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              beta.kubernetes.io/os=linux
                             kubernetes.io/role=agent
                             type=virtual-kubelet
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                             virtual-kubelet.io/provider op=Exists
Events:                      <none>

kubectl describe pod without ACI:

Name:             azbackup-cron-27918810-vvp8g
Namespace:        <redacted>
Priority:         0
Service Account:  default
Node:             aks-agentpool-41195700-vmss000001/10.1.0.33
Start Time:       Tue, 31 Jan 2023 02:30:00 +0100
Labels:           controller-uid=0652cdcc-7f73-4954-87cb-fd7c2c83112f
                  job-name=azbackup-cron-27918810
Annotations:      <none>
Status:           Succeeded
IP:               10.1.0.56
IPs:
  IP:           10.1.0.56
Controlled By:  Job/azbackup-cron-27918810
Containers:
  azbackup:
    Container ID:   containerd://55f199a7b75356f06681190ff055b33af90226a92bb75eca2674b567d7debcb5
    Image:          <redacted>
    Image ID:       <redacted>
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 31 Jan 2023 02:30:04 +0100
      Finished:     Tue, 31 Jan 2023 02:30:37 +0100
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      <redacted>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tc4cf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-tc4cf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

@helayoty
Member

helayoty commented Feb 3, 2023

Is there a supported way to update this in an AKS cluster where it was installed using the addon, not Helm?

@alexjebens The current AKS virtual node addon version is 1.4.8. The addon version will be updated automatically once we have a new release. There might be a slight delay (a few weeks) between the OSS and AKS versions until the new AKS release is rolled out.
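
One way to check which virtual-kubelet version the addon is currently running is to inspect the connector image in kube-system (a sketch; the exact pod names and labels may vary between AKS releases):

kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep -i aci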
