Running on a native Google (GCP) Kubernetes Cluster

To run on a native GCP Kubernetes cluster:

  • Create a Kubernetes cluster from the Google Cloud console or through the gcloud CLI (see the sketch below).
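A minimal sketch of cluster creation through the gcloud CLI; the cluster name, zone and node count below are placeholders:

# Create a small GKE cluster (name, zone and node count are placeholders)
gcloud container clusters create galaxy-cluster \
  --zone europe-west1-b \
  --num-nodes 2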
  • Go to the Connect section of the GCP console; it provides a gcloud CLI command to set up kubectl to connect to the cluster (roughly like the sketch below).
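The command shown by the Connect dialog is roughly equivalent to the following (cluster name, zone and project are placeholders):

# Point kubectl at the new cluster
gcloud container clusters get-credentials galaxy-cluster \
  --zone europe-west1-b \
  --project my-gcp-project

# Verify that kubectl can reach the cluster
kubectl get nodes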
  • Create the following cluster role binding for Helm's Tiller (kubectl create -f) <fileWithFollowingContent>:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
  • Run helm init from a machine where the galaxy-helm-repo Helm repository has previously been added (a sketch follows below).
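A minimal sketch of that step, assuming Helm 2 (Tiller-based); the repository URL below is a placeholder for the actual galaxy-helm-repo location:

# Add the Galaxy Helm repository (the URL is a placeholder)
helm repo add galaxy-helm-repo https://example.org/galaxy-helm-repo

# Install Tiller into the cluster
helm init

Passing --service-account tiller to helm init binds Tiller to the service account directly, which avoids the patch shown in the next step.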
  • You might need to patch the Tiller deployment so that it runs under the tiller service account created above:
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
  • Create a PersistentVolume pointing at an NFS server (the manifest below assumes an NFS server named singlefs-1-vm exporting /data) with kubectl create -f <fileWithFollowingContent>:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  storageClassName: standard
  capacity:
    storage: 20Gi
    # volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data
    server: singlefs-1-vm
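Assuming the PersistentVolume manifest above is saved as pv-nfs.yaml (a hypothetical file name), create it and check that it becomes Available:

kubectl create -f pv-nfs.yaml
kubectl get pv pv-nfs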

  • Run our helm install process (see the sketch below).
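A hedged sketch of that step; the chart name galaxy-helm-repo/galaxy is an assumption and should be replaced by whichever chart the galaxy-helm-repo repository actually provides:

helm install galaxy-helm-repo/galaxy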