
Volume Mount Issue in health-check Deployment on Azure AKS #136

Open
intrusus-dev opened this issue Aug 31, 2023 · 0 comments

Environment:

Platform: Azure AKS
Affected Scenario: scenarios/health-check/deployment.yaml
Kubernetes Goat Version: v2.2.0

Issue Description:

During the deployment of the Kubernetes Goat's "health-check" scenario on Azure AKS, I encountered a volume mount issue that prevented the pod from transitioning out of the "ContainerCreating" state. This issue appears to be specific to the Azure AKS platform.

Symptoms:

The "health-check" pod remained stuck in the "ContainerCreating" state.
Logs showed errors related to mounting volumes, specifically the docker-sock-volume.

 kubernetes-goat % kubectl describe pod health-check-deployment-59f4b679b-zwlb6 
Name:             health-check-deployment-59f4b679b-zwlb6
Namespace:        default
Priority:         0
Service Account:  default
Node:             aks-nodepool1-19398891-vmss000001/10.0.5.33
Start Time:       Thu, 31 Aug 2023 10:56:44 +0200
Labels:           app=health-check
                  pod-template-hash=59f4b679b
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/health-check-deployment-59f4b679b
Containers:
  health-check:
    Container ID:   
    Image:          madhuakula/k8s-goat-health-check
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     30m
      memory:  100Mi
    Requests:
      cpu:        30m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /custom/docker/docker.sock from docker-sock-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9999x (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  docker-sock-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  Socket
  kube-api-access-9999x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    14m                  default-scheduler  Successfully assigned default/health-check-deployment-59f4b679b-zwlb6 to aks-nodepool1-19398891-vmss000001
  Warning  FailedMount  3m28s (x2 over 12m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[docker-sock-volume], unattached volumes=[docker-sock-volume kube-api-access-9999x]: timed out waiting for the condition
  Warning  FailedMount  71s (x4 over 10m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[docker-sock-volume], unattached volumes=[kube-api-access-9999x docker-sock-volume]: timed out waiting for the condition
  Warning  FailedMount  12s (x15 over 14m)   kubelet            MountVolume.SetUp failed for volume "docker-sock-volume" : hostPath type check failed: /var/run/docker.sock is not a socket file     

Troubleshooting:

  • Reviewed the pod's events and logs for insights into the issue.
  • Checked the Docker socket path and verified the hostPath configuration in deployment.yaml.
  • Investigated node-level conditions and permissions.
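The failing check in the events above is the kubelet's HostPathType validation: with type Socket, the mount is refused unless the path already exists as a Unix socket on the node. The same test can be sketched in shell (the function name is illustrative, not from the scenario):

```shell
# check_hostpath_socket PATH — rough analogy for the kubelet's
# HostPathType=Socket validation: the mount only proceeds if PATH
# already exists on the node as a Unix socket.
check_hostpath_socket() {
  if [ -S "$1" ]; then
    echo "socket"         # mount would proceed
  elif [ -e "$1" ]; then
    echo "not-a-socket"   # "hostPath type check failed: ... is not a socket file"
  else
    echo "missing"        # an absent path also fails the Socket type check
  fi
}

check_hostpath_socket /var/run/docker.sock
```

Running the equivalent of this on an AKS node would show why the mount fails there; containerd-based nodes typically expose /run/containerd/containerd.sock rather than a Docker socket.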

Resolution:

The issue was caused by the hostPath type used for the docker-sock-volume. With type Socket, the kubelet refuses the mount unless the path already exists as a Unix socket on the node, and on AKS nodes (which run containerd rather than Docker) /var/run/docker.sock is not present as a socket, hence the "is not a socket file" error. I updated the type to DirectoryOrCreate, which resolved the volume mount problem: with that type, the kubelet creates an empty directory at the path if nothing exists there, so the type check always passes.

volumes:
  - name: docker-sock-volume
    hostPath:
      path: /var/run/docker.sock
      type: DirectoryOrCreate
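
For context on why this change helps: DirectoryOrCreate behaves roughly like mkdir -p, so if nothing exists at the path the kubelet creates an empty directory there before mounting, and the type check cannot fail on an absent path. A minimal shell analogy (the temporary path is illustrative):

```shell
# DirectoryOrCreate analogy: create the path as a directory if it is
# missing, after which the "is it a directory?" check trivially passes.
path="$(mktemp -d)/docker.sock"   # stand-in for /var/run/docker.sock on the node
mkdir -p "$path"                  # DirectoryOrCreate: create if absent
[ -d "$path" ] && echo "type check passes"
```

Note that the container then sees an empty directory at the mount point rather than a working Docker socket, which may be acceptable for getting the scenario pod into the Running state.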

Steps to Reproduce:

  1. Deploy Kubernetes Goat on Azure AKS using the provided setup-kubernetes-goat.sh, which references the current scenarios/health-check/deployment.yaml.
  2. Observe the pods with kubectl get pods; the "health-check" pod remains stuck in the "ContainerCreating" state.

Expected Behavior:

The "health-check" pod should transition from the "ContainerCreating" state to the "Running" state without any volume mount issues.

Recommendation:

Update the hostPath type for the docker-sock-volume in deployment.yaml as shown above to ensure the scenario deploys successfully on Azure AKS. Because this issue may not surface on other platforms, it is also worth testing the deployment on Azure AKS specifically.
