
Defining a ReplicationDestination for the rclone mover causes issue if PVC never backed up/replicated #1122

Open
tssgery opened this issue Feb 16, 2024 · 1 comment

tssgery commented Feb 16, 2024

Using the rclone mover with an S3 backend, my deployment flow is as follows:

  1. Define ReplicationDestination
  2. Define PVC, with a dataSourceRef pointing to the ReplicationDestination
  3. Define Pod/Workload that uses PVC
  4. Define ReplicationSource for the PVC

This works well, except when I add a new PVC definition. In that case, the ReplicationDestination from step 1 never exits successfully because there is no permissions.facl file within S3. The mover's

`stat /tmp/permissions.facl`

call fails, causing the container to exit with a non-zero return code. Kubernetes sees this and reschedules the mover, putting it into an endless loop.

Here is an example manifest that exhibits the issue (note that the development/does-not-exist bucket/folder does not exist):

---
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: rclone-destination-test
spec:
  trigger:
    manual: "populate-me"
  rclone:
    #destinationPVC: example-pvc
    rcloneConfigSection: "minio"
    rcloneDestPath: "development/does-not-exist"
    rcloneConfig: volsync-rclone-secret
    copyMethod: Snapshot
    accessModes: [ReadWriteOnce]
    capacity: 10Gi
    storageClassName: ceph-block
    volumeSnapshotClassName: ceph-block
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  dataSourceRef:
    kind: ReplicationDestination
    apiGroup: volsync.backube
    name: rclone-destination-test
  storageClassName: ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  volumes:
    - name: example-pvc
      persistentVolumeClaim:
        claimName: example-pvc
  containers:
    - name: example-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: example-pvc
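
For completeness, the ReplicationSource from step 4 would look roughly like the following for this PVC. This is only a sketch: the name and trigger schedule are placeholders I've made up, and the rclone settings simply mirror the destination above.

```yaml
---
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: rclone-source-test        # placeholder name
spec:
  sourcePVC: example-pvc          # the PVC defined above
  trigger:
    schedule: "*/10 * * * *"      # example schedule; any trigger would do
  rclone:
    rcloneConfigSection: "minio"
    rcloneDestPath: "development/does-not-exist"
    rcloneConfig: volsync-rclone-secret
    copyMethod: Snapshot
    storageClassName: ceph-block
    volumeSnapshotClassName: ceph-block
```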

Expected behavior
I was hoping that the rclone mover would detect that no files exist yet, recognize that the permissions.facl file is therefore not needed, and ignore its absence.
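
Roughly speaking, I'd expect the restore step to behave like the sketch below. This is just an illustration of the hoped-for behavior, not the actual mover script, and it assumes the file is consumed by something like `setfacl --restore`:

```bash
# Illustrative only: skip the ACL restore when permissions.facl was never
# created (i.e. nothing has been backed up yet) instead of failing hard.
if [ -f /tmp/permissions.facl ]; then
    setfacl --restore=/tmp/permissions.facl
else
    echo "permissions.facl not found; nothing to restore, skipping"
fi
```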

Actual results
Described above

tssgery added the bug label Feb 16, 2024
tesshuflower (Contributor) commented:

I think the problem is that here we actually want an error, since no sync to the destination has ever succeeded (this repo was never synced to). If we ignored these types of errors, other users might think their replications were succeeding when in fact they were doing nothing.
