Node Aware Replication for Redis Cluster on On-Prem Kubernetes #161

Open
vineelyalamarthy opened this issue Apr 4, 2023 · 1 comment

@vineelyalamarthy

Great work on the project, PayU team.

I have a couple of questions.

  1. Is node-aware replication already working with this code? Issue #152 (master and its corresponding replica are running on the same AZ) is still open, so I am assuming that a master and its replica being in different failure domains is not guaranteed. Am I correct?
  2. Let's say I increase the number of leaders and replicas after the initial deployment; will the Redis Cluster automatically rebalance the slots so that the new leaders own slots as well?
  3. We are planning a three-node on-prem Kubernetes cluster with Redis Cluster deployed in such a way that each master and its replica land on different nodes. Does this Redis Operator support that?
  4. Also, since this is NOT a StatefulSet, does PV and PVC mounting work well? Let's say we want persistence for the Redis Cluster so that the data is saved across pod restarts. I am referring to: https://medium.com/payu-engineering/why-we-built-our-own-k8s-redis-operator-part-1-1a8cdce92412

Thanks a lot in advance.

@shyimo (Collaborator) commented Apr 19, 2023

Hi @vineelyalamarthy.
Thanks a lot for the feedback, and sorry for the late response; it's the holiday season now.

  1. Yes. It works well and has been tested on production systems.
  2. Yes. This is done completely automatically by the operator.
  3. Yes. This is supported out of the box without any special configuration; the operator was designed this way. (A generic Kubernetes sketch of this kind of node separation follows this list.)
  4. The PVC feature is the last part that has not been implemented yet. We hope to get it done by the end of 2023 / start of 2024. (A generic PVC sketch is included at the end for reference.)
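
For questions 1 and 3, the Kubernetes building block normally used to keep two Redis pods off the same node (or out of the same zone) is pod anti-affinity. The sketch below is only a generic illustration under assumed labels (`app: redis-cluster`) and is not taken from this operator's code; the operator may enforce its placement through its own controller logic instead.

```yaml
# Generic pod anti-affinity sketch (assumed labels, not this operator's actual spec).
# The scheduler is asked never to place two pods labelled app=redis-cluster on the
# same node; switching topologyKey to topology.kubernetes.io/zone spreads them
# across availability zones instead.
apiVersion: v1
kind: Pod
metadata:
  name: redis-cluster-node
  labels:
    app: redis-cluster            # assumed label; adjust to your deployment
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: redis-cluster
          topologyKey: kubernetes.io/hostname
  containers:
    - name: redis
      image: redis:7.0
      command: ["redis-server", "--cluster-enabled", "yes"]
```

With a hard rule like this, each node holds at most one pod carrying that label, so on a three-node cluster a master and its replica can never be co-located (at the cost of capping the pod count at the node count).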

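On point 4, while the operator does not yet wire up persistence, the Kubernetes objects it would ultimately rely on are ordinary PersistentVolumeClaims mounted as volumes, which work independently of StatefulSets. The following is a minimal generic sketch of Redis persistence at that level; the names, storage size, and layout are assumptions for illustration, not the operator's eventual API.

```yaml
# Generic PVC + volume-mount sketch for Redis persistence (not operator-generated).
# The claim outlives pod restarts, so an AOF file written under /data does too.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data                  # assumed name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi                  # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-with-pvc
spec:
  containers:
    - name: redis
      image: redis:7.0
      args: ["--appendonly", "yes"]  # persist writes to the mounted volume
      volumeMounts:
        - name: data
          mountPath: /data           # default working directory of the redis image
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: redis-data
```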