This repository has been archived by the owner on Jul 7, 2020. It is now read-only.

glusterFS pod deploy failing #639

Open
sunielm7 opened this issue Feb 5, 2020 · 0 comments

sunielm7 commented Feb 5, 2020

I am trying to deploy GlusterFS on a Kubernetes cluster running on Ubuntu 18.04. I believe I followed all the prerequisites described at https://github.com/gluster/gluster-kubernetes, but after running the ./gk-deploy script I get an error saying the pods were not found.

./gk-deploy -g --admin-key ubuntu --user-key xxxxx
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.

The client machine that will run this script must have:

  • Administrative access to an existing Kubernetes or OpenShift cluster
  • Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:

  • 2222 - sshd (if running GlusterFS in a pod)
  • 24007 - GlusterFS Management
  • 24008 - GlusterFS RDMA
  • 49152 to 49251 - Each brick for every volume on the host requires its own
    port. For every new brick, one new port will be used starting at 49152. We
    recommend a default range of 49152-49251 on each host, though you can adjust
    this to fit your needs.
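On Ubuntu 18.04 nodes like the ones in this report, the port list above could be translated into ufw rules. A minimal sketch that only prints the commands (review them and run with sudo yourself; the exact firewall tooling is an assumption, not something gk-deploy mandates):

```shell
# Print ufw rules for the GlusterFS ports listed above (sketch only;
# nothing is applied - run the printed commands with sudo if they fit).
for p in 2222 24007 24008; do
  echo "ufw allow ${p}/tcp"
done
# One rule covers the recommended brick port range.
echo "ufw allow 49152:49251/tcp"
```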

The following kernel modules must be loaded:

  • dm_snapshot
  • dm_mirror
  • dm_thin_pool
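Missing device-mapper modules are a common cause of glusterd failing to start in the pod, so it is worth verifying them on each node. A small check sketch (reads /proc/modules, which needs no root on Linux):

```shell
# Verify the kernel modules listed above are loaded on this node (sketch).
for m in dm_snapshot dm_mirror dm_thin_pool; do
  if grep -q "^${m} " /proc/modules 2>/dev/null; then
    echo "${m}: loaded"
  else
    echo "${m}: missing (load with: sudo modprobe ${m})"
  fi
done
```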

For systems with SELinux, the following settings need to be considered:

  • virt_sandbox_use_fusefs should be enabled on each node to allow writing to
    remote GlusterFS volumes
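The SELinux boolean can be checked in one line. Ubuntu hosts usually ship without SELinux tooling, so this sketch falls back to printing the command to run on SELinux-enabled nodes:

```shell
# Check the SELinux boolean named above, if SELinux tools exist (sketch).
if command -v getsebool >/dev/null 2>&1; then
  getsebool virt_sandbox_use_fusefs
else
  echo "no SELinux tools here; on SELinux nodes run: setsebool -P virt_sandbox_use_fusefs on"
fi
```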

In addition, for an OpenShift deployment you must:

  • Have 'cluster_admin' role on the administrative account doing the deployment
  • Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
  • Have a router deployed that is configured to allow apps to access services
    running in the cluster

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: Y
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
GlusterFS pods ... not found.
deploy-heketi pod ... not found.
heketi pod ... not found.
gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount/heketi-service-account created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
node/work-node-1-nokia labeled
node/work-node-2-nokia labeled
node/work-node-3-nokia labeled
daemonset.extensions/glusterfs created
Waiting for GlusterFS pods to start ... pods not found.

Running kubectl describe on one of the pods shows that the readiness and liveness probe checks are failing:

kubectl describe pod glusterfs-9fsvx
Name: glusterfs-9fsvx
Namespace: default
Priority: 0
Node: work-node-3-nokia/192.168.101.36
Start Time: Tue, 04 Feb 2020 19:31:37 +0000
Labels: controller-revision-hash=7d855fc9fc
glusterfs=pod
glusterfs-node=pod
pod-template-generation=1
Annotations:
Status: Running
IP: 192.168.101.36
Controlled By: DaemonSet/glusterfs
Containers:
glusterfs:
Container ID: docker://e4020c5b63529c1722b84620f5ce878d2d25f40430c8146d6340b4b87f9c08b1
Image: gluster/gluster-centos:latest
Image ID: docker-pullable://gluster/gluster-centos@sha256:8167034b9abf2d16581f3f4571507ce7d716fb58b927d7627ef72264f802e908
Port:
Host Port:
State: Running
Started: Tue, 04 Feb 2020 19:31:53 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 100Mi
Liveness: exec [/bin/bash -c if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi] delay=40s timeout=3s period=25s #success=1 #failure=50
Readiness: exec [/bin/bash -c if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi] delay=40s timeout=3s period=25s #success=1 #failure=50
Environment:
HOST_DEV_DIR: /mnt/host-dev
GLUSTER_BLOCKD_STATUS_PROBE_ENABLE: 1
GB_GLFS_LRU_COUNT: 15
TCMU_LOGDIR: /var/log/glusterfs/gluster-block
Mounts:
/etc/glusterfs from glusterfs-etc (rw)
/etc/ssl from glusterfs-ssl (ro)
/lib/modules from kernel-modules (ro)
/mnt/host-dev from glusterfs-host-dev (rw)
/run from glusterfs-run (rw)
/run/lvm from glusterfs-lvm (rw)
/sys/class from glusterfs-block-sys-class (rw)
/sys/fs/cgroup from glusterfs-cgroup (ro)
/sys/module from glusterfs-block-sys-module (rw)
/var/lib/glusterd from glusterfs-config (rw)
/var/lib/heketi from glusterfs-heketi (rw)
/var/lib/misc/glusterfsd from glusterfs-misc (rw)
/var/log/glusterfs from glusterfs-logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hh46k (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
glusterfs-heketi:
Type: HostPath (bare host directory volume)
Path: /var/lib/heketi
HostPathType:
glusterfs-run:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
glusterfs-lvm:
Type: HostPath (bare host directory volume)
Path: /run/lvm
HostPathType:
glusterfs-etc:
Type: HostPath (bare host directory volume)
Path: /etc/glusterfs
HostPathType:
glusterfs-logs:
Type: HostPath (bare host directory volume)
Path: /var/log/glusterfs
HostPathType:
glusterfs-config:
Type: HostPath (bare host directory volume)
Path: /var/lib/glusterd
HostPathType:
glusterfs-host-dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType:
glusterfs-misc:
Type: HostPath (bare host directory volume)
Path: /var/lib/misc/glusterfsd
HostPathType:
glusterfs-block-sys-class:
Type: HostPath (bare host directory volume)
Path: /sys/class
HostPathType:
glusterfs-block-sys-module:
Type: HostPath (bare host directory volume)
Path: /sys/module
HostPathType:
glusterfs-cgroup:
Type: HostPath (bare host directory volume)
Path: /sys/fs/cgroup
HostPathType:
glusterfs-ssl:
Type: HostPath (bare host directory volume)
Path: /etc/ssl
HostPathType:
kernel-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
default-token-hh46k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hh46k
Optional: false
QoS Class: Burstable
Node-Selectors: storagenode=glusterfs
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message


Normal Scheduled 9m default-scheduler Successfully assigned default/glusterfs-9fsvx to work-node-3-nokia
Normal Pulling 8m59s kubelet, work-node-3-nokia Pulling image "gluster/gluster-centos:latest"
Normal Pulled 8m46s kubelet, work-node-3-nokia Successfully pulled image "gluster/gluster-centos:latest"
Normal Created 8m45s kubelet, work-node-3-nokia Created container glusterfs
Normal Started 8m44s kubelet, work-node-3-nokia Started container glusterfs
Warning Unhealthy 3m47s (x11 over 7m57s) kubelet, work-node-3-nokia Readiness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active glusterd.service
Warning Unhealthy 3m42s (x11 over 7m52s) kubelet, work-node-3-nokia Liveness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active glusterd.service
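The failing events above come from the probe command shown in the pod spec: it prefers /usr/local/bin/status-probe.sh (shipped in the gluster-centos image) and otherwise checks glusterd directly. Its branching can be sketched outside the container to see which path would run:

```shell
# Sketch of the probe's fallback logic from the Liveness/Readiness spec
# above; on a machine without status-probe.sh the else branch is taken.
probe() {
  if command -v /usr/local/bin/status-probe.sh >/dev/null 2>&1; then
    /usr/local/bin/status-probe.sh "$1"
  else
    echo "fallback check: systemctl status glusterd.service"
  fi
}
probe readiness
```

Either way, the "failed check: systemctl -q is-active glusterd.service" message means glusterd itself is not active inside the container.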

I followed some of the steps mentioned in #341, especially running gk-deploy with abort and deleting the gluster files, but that didn't help. glusterd isn't running on the nodes now.
The Kubernetes version is 1.15.2, deployed using Cluster API.
Has anyone seen this before?
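A few commands that might surface why glusterd fails inside the pod. They are printed rather than executed here since they need the live cluster, and the pod name glusterfs-9fsvx is specific to the describe output above:

```shell
# Print the next debugging steps (sketch; substitute your own pod name).
cat <<'EOF'
kubectl logs glusterfs-9fsvx
kubectl exec glusterfs-9fsvx -- systemctl status glusterd.service
kubectl exec glusterfs-9fsvx -- journalctl -u glusterd --no-pager -n 50
kubectl exec glusterfs-9fsvx -- cat /var/log/glusterfs/glusterd.log
EOF
```

The glusterd journal or /var/log/glusterfs/glusterd.log usually states the concrete startup failure (e.g. a missing kernel module or a stale /var/lib/glusterd left from a previous attempt).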
