
design(hld): high level proposal for moveEngine CR #22

Open · wants to merge 1 commit into `master`
Conversation

vishnuitta (Contributor)

This PR has the proposal for the high-level flow of the moveEngine controller.
It allows contributors to fill in the TODOs of the HLD.


Signed-off-by: Vitta <vitta@mayadata.io>
- schedule time to transfer data at regular intervals
- Telemetry management

`MoveEngine` controller that is watching this CR will deploy the KubeMove framework (`MoveEngine`, `DataSync` controller) at the remote cluster to …


By deploy, do you mean MVEC will install the kubemove operator on the remote cluster, or will it just create CRs assuming the operator is pre-deployed?

vishnuitta (Contributor, Author)


This KEP-2 is specific to the MoveEngine CR, which is one of the various CRs of KubeMove.
The summary content up to the spec is copied from KEP-1 and needs correction. I will fix it and update this PR.

- Telemetry management

`MoveEngine` controller that is watching this CR will deploy the KubeMove framework (`MoveEngine`, `DataSync` controller) at the remote cluster to
set up the application and transfer the data to the remote cluster. This controller creates the datasync CR periodically and invokes the `DataSync` …


This controller creates the datasync CR periodically

in the source or destination cluster?


`MoveEngine` controller that is watching this CR will deploy the KubeMove framework (`MoveEngine`, `DataSync` controller) at the remote cluster to
set up the application and transfer the data to the remote cluster. This controller creates the datasync CR periodically and invokes the `DataSync`
controller to initiate the data transfer through DDM. Once DDM completes the data transfer, the `MoveEngine` controller updates the remote `DataSync` controller by creating dataSync CR at remote cluster to validate/verify the data transfer, and to perform the restore of data.


"by creating dataSync CR at remote cluster" - I'm a bit confused here. Is it the same DataSync CR as before?


`MoveEngine` controller that is watching this CR will deploy the KubeMove framework (`MoveEngine`, `DataSync` controller) at the remote cluster to
set up the application and transfer the data to the remote cluster. This controller creates the datasync CR periodically and invokes the `DataSync`
controller to initiate the data transfer through DDM. Once DDM completes the data transfer, the `MoveEngine` controller updates the remote `DataSync` controller by creating dataSync CR at remote cluster to validate/verify the data transfer, and to perform the restore of data.


Once DDM completes the data transfer

Does it mean one "run" of the periodic sync?
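For concreteness, here is a sketch of the per-period CR the `MoveEngine` controller might create on each run. This is entirely hypothetical: the DataSync schema is not defined in this PR, and every field name here (`moveEngine`, `mode`, the `kubemove.io/v1alpha1` group/version) is an assumption, not part of any existing KubeMove API.

```yaml
# Hypothetical DataSync CR created by the MoveEngine controller on each
# sync period. All field names are illustrative assumptions.
apiVersion: kubemove.io/v1alpha1
kind: DataSync
metadata:
  generateName: move-app-test-sync-   # one CR per periodic run
  namespace: ns1
spec:
  moveEngine: move-app-test           # owning MoveEngine CR
  mode: backup                        # the CR created at the remote cluster
                                      # would presumably use "restore"
```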


## Proposal

`spec` of the MoveEngine CR consists of the following fields:


This "Move" operator could use a human-in-the-loop model for Approve/Deny. It can be similar to CertificateSigningRequest in k8s: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#create-a-certificate-signing-request-object-to-send-to-the-kubernetes-api

In that case, MoveRequest could be the more appropriate name.
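To illustrate that suggestion, an approval flow modeled on CSR-style status conditions might look like the sketch below. None of these fields exist in KubeMove today; the condition shape is borrowed from how `CertificateSigningRequest` records Approved/Denied, and everything else is a hypothetical placeholder.

```yaml
# Hypothetical sketch: a CSR-style human-in-the-loop approval recorded in
# the CR status. Field names are illustrative, not an existing KubeMove API.
apiVersion: kubemove.io/v1alpha1
kind: MoveEngine        # or MoveRequest, per the naming suggestion above
metadata:
  name: move-app-test
status:
  conditions:
  - type: Approved            # or "Denied", as with CertificateSigningRequest
    status: "True"
    reason: ApprovedByAdmin
    message: move to remote cluster approved by cluster admin
```

The controller would then refuse to create any DataSync CRs until an Approved condition is present.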

remoteNamespace: ns2

# label selectors
selectors:


From our Stash experience, we found that it was better to use a 1-1 mapping between a MoveEngine CR and a k8s workload.

In Stash v1alpha1, we used a label-selector-based approach. It caused a number of problems:

- When do you resolve the labels? This causes a lot of complexity and is confusing for the user.
- What if different types of apps have the same labels?
- When keeping track of `.status`, it is easier to deal with one workload instead of a dynamic number of workloads.
- Also, users want to move one app at a time, so separate CRs are easier.

So, in Stash v1beta1, we switched to using a ref instead of label selectors: https://github.com/stashed/stash/blob/master/apis/stash/v1beta1/types.go#L46-L88

Later you can introduce a non-namespaced MoveEngineBlueprint CR, so that users with multiple workloads that need moving can use the MoveEngineBlueprint as a template.
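A ref-based target, as suggested above, could replace the selector block roughly as follows. This is a hypothetical sketch loosely patterned on Stash's v1beta1 target ref; the `targetRef` field and the `kubemove.io/v1alpha1` group/version are assumptions, not the actual KubeMove spec.

```yaml
# Hypothetical sketch: ref-based target instead of label selectors.
# targetRef is illustrative; KubeMove's actual spec may differ.
apiVersion: kubemove.io/v1alpha1
kind: MoveEngine
metadata:
  name: move-app-test
spec:
  remoteNamespace: ns2
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app        # exactly one workload per MoveEngine CR
```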

- app.example.com=test

# sync period
syncPeriod: */5 * * * *


When does it end syncing?
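Assembling the fragments quoted in this thread, a full MoveEngine CR might look like the sketch below. Only `remoteNamespace`, `selectors`, and `syncPeriod` appear in the PR; the `apiVersion`, `kind`, and `metadata` values are assumptions added to make the example self-contained.

```yaml
# Assembled sketch of a MoveEngine CR from the fragments quoted above.
# apiVersion, kind, and metadata are assumptions, not part of the PR.
apiVersion: kubemove.io/v1alpha1
kind: MoveEngine
metadata:
  name: move-app-test
  namespace: ns1
spec:
  remoteNamespace: ns2
  # label selectors
  selectors:
  - app.example.com=test
  # sync period, in cron format: every 5 minutes
  syncPeriod: "*/5 * * * *"
```

Note that a bare `*/5 * * * *` can be misparsed by YAML-adjacent tooling, so quoting the cron expression is safer.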
