Replies: 4 comments 15 replies
---
The longhorn-velero-plugin is still under development (about 80% complete, if I remember correctly). Have you tried velero-plugin-for-csi yet?
---
@jenting thank you for getting back to me. I did try it with velero-plugin-for-csi; unfortunately, it did not back up the volume:

```
time="2022-03-09T10:27:21Z" level=info msg="Waiting for volumesnapshotcontents snapcontent-0e6aa017-b55d-449c-a404-adf716e5a8d1 to have snapshot handle. Retrying in 5s" backup=velero-system/gitlab-backup2 cmd=/plugins/velero-plugin-for-csi logSource="/go/src/velero-plugin-for-csi/internal/util/util.go:182" pluginName=velero-plugin-for-csi
```
---
I did a quick try with Velero 1.8 + Velero AWS plugin 1.4 + Velero CSI plugin 0.2:

```shell
helm install velero \
  --namespace=velero \
  --create-namespace \
  --set-file credentials.secretContents.cloud=credentials-velero \
  --set configuration.provider=aws \
  --set configuration.backupStorageLocation.name=default \
  --set configuration.backupStorageLocation.bucket=velero \
  --set configuration.backupStorageLocation.config.region=minio-default \
  --set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
  --set configuration.backupStorageLocation.config.s3Url=http://minio-default.velero.svc.cluster.local:9000 \
  --set configuration.backupStorageLocation.config.publicUrl=http://localhost:9000 \
  --set snapshotsEnabled=true \
  --set configuration.volumeSnapshotLocation.name=default \
  --set configuration.volumeSnapshotLocation.config.region=minio-default \
  --set initContainers[0].name=velero-plugin-for-aws \
  --set initContainers[0].image=velero/velero-plugin-for-aws:v1.4.0 \
  --set initContainers[0].volumeMounts[0].mountPath=/target \
  --set initContainers[0].volumeMounts[0].name=plugins \
  --set configuration.features=EnableCSI \
  --set initContainers[1].name=velero-plugin-for-csi \
  --set initContainers[1].image=velero/velero-plugin-for-csi:v0.2.0 \
  --set initContainers[1].volumeMounts[0].mountPath=/target \
  --set initContainers[1].volumeMounts[0].name=plugins \
  vmware-tanzu/velero
```

I added the `velero.io/csi-volumesnapshot-class` label to the `VolumeSnapshotClass`:

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  labels:
    velero.io/csi-volumesnapshot-class: "true"
  name: longhorn
driver: driver.longhorn.io
deletionPolicy: Delete
```

and followed the example in https://velero.io/blog/csi-integration/ by deploying the application:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: csi-app
---
kind: Pod
apiVersion: v1
metadata:
  namespace: csi-app
  name: csi-nginx
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - image: nginx
      name: nginx
      command: [ "sleep", "1000000" ]
      volumeMounts:
        - name: longhorndisk01
          mountPath: "/mnt/longhorndisk"
  volumes:
    - name: longhorndisk01
      persistentVolumeClaim:
        claimName: pvc-longhorndisk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: csi-app
  name: pvc-longhorndisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn
```

Then I wrote random data to the mount point:

```shell
$ kubectl -n csi-app exec -ti csi-nginx bash
# while true; do echo -n "FOOBARBAZ " >> /mnt/longhorndisk/foobar; done
^C
```

I was able to back up without any error message using:

```shell
velero backup create csi-b2 --include-namespaces csi-app --wait
```
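For context on the error reported earlier in the thread: the Velero CSI plugin polls each `VolumeSnapshotContent` until its status carries a snapshot handle. A sketch of what a successfully bound object should roughly look like (all field values below are illustrative, not taken from a real cluster):

```yaml
# Illustrative VolumeSnapshotContent after the CSI driver has cut the snapshot.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-example
spec:
  driver: driver.longhorn.io
  deletionPolicy: Delete
  volumeSnapshotRef:
    namespace: csi-app
    name: velero-pvc-longhorndisk-example
status:
  readyToUse: true
  snapshotHandle: example-snapshot-handle   # the field the plugin waits for
```

If `status.snapshotHandle` never appears, the snapshot controller or CSI driver side failed, so checking the snapshot-controller and Longhorn CSI plugin logs is usually the next step.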
---
@jenting thank you once again for testing this out. The only difference on my side was that the `VolumeSnapshotClass` had its API version set to `snapshot.storage.k8s.io/v1` instead of `v1beta1`. I'm not sure whether that was the culprit, but it's working now. When using CSI, is it expected that the backup goes to Longhorn's backup destination rather than Velero's?
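In case it helps anyone comparing the two setups, the `v1` variant of the labelled class described above would look like this (a sketch; only the `apiVersion` differs from the `v1beta1` manifest earlier in the thread):

```yaml
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: longhorn
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: driver.longhorn.io
deletionPolicy: Delete
```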
---
Question
Has anyone had any success in conducting application consistent backups with Velero & Longhorn CSI snapshots?
Environment
Longhorn version: v1.2.3
Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Kubectl
Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K3s v1.22.5+k3s1
Number of management node in the cluster: 3
Number of worker node in the cluster: 2
Node config
OS type and version: Ubuntu 20.04 LTS/18.04 LTS.
CPU per node: 2 for Master Nodes, 8 for worker.
Memory per node: 4G for Master, 32G for worker.
Disk type (e.g. SSD/NVMe): 10k SAS disks.
Network bandwidth between the nodes: VMs are hosted on the same node within a VM cluster.
Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): VMware.
Number of Longhorn volumes in the cluster: 12
Additional context
As Longhorn is limited to a single backup destination, we were looking at taking additional backups with Velero. In our case, Longhorn backups are intended for DR purposes and go to a cloud provider. At our local DC, however, we'd like to have local backups available, so that restoring an application doesn't require triggering a DR process. Velero seemed like a good option for triggering backups and storing them locally. We managed to get this working using Restic, but for the backup to be consistent we need to freeze the volume. Given that there is a Longhorn Velero plugin (https://github.com/ecatlabs/longhorn-velero-plugin) and CSI snapshot support, I believe there are cleaner ways than using Restic.
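On the freeze requirement mentioned above: Velero supports pre/post backup hooks via pod annotations, which can wrap the backup in an `fsfreeze`/unfreeze pair. A minimal sketch, assuming the container image ships `fsfreeze` and the pod has the privileges to freeze its mount (pod name, container name, paths, and claim name are all illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  annotations:
    # Run before Velero backs up this pod's volumes...
    pre.hook.backup.velero.io/container: app
    pre.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--freeze", "/var/lib/data"]'
    # ...and thaw the filesystem afterwards.
    post.hook.backup.velero.io/container: app
    post.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--unfreeze", "/var/lib/data"]'
spec:
  containers:
    - name: app
      image: nginx
      securityContext:
        privileged: true   # fsfreeze typically needs elevated privileges
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

This only gives filesystem-level consistency; applications with their own transaction logs (databases, etc.) may additionally need an application-level quiesce in the pre-hook.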
Using the Longhorn Velero plugin, I was only able to trigger a snapshot; the volume was not backed up. Using CSI snapshots, a snapshot is created but the backup fails with:

```
time="2022-03-09T10:27:21Z" level=info msg="Waiting for volumesnapshotcontents snapcontent-0e6aa017-b55d-449c-a404-adf716e5a8d1 to have snapshot handle. Retrying in 5s" backup=velero-system/gitlab-backup2 cmd=/plugins/velero-plugin-for-csi logSource="/go/src/velero-plugin-for-csi/internal/util/util.go:182" pluginName=velero-plugin-for-csi
```