
Xen affinity rule #153

Open
lavih opened this issue Jun 15, 2021 · 7 comments

Comments

@lavih

lavih commented Jun 15, 2021

Hello Dom,
would it be possible to add an anti-affinity rule like the one in the vmware_vsphere provider, documented here:
https://registry.terraform.io/providers/hashicorp/vsphere/latest/docs/resources/compute_cluster_vm_anti_affinity_rule
The ability to deploy the Xen VMs on different hosts via TF and create an HA environment would greatly help us!
Thanks :)
Lavih

@olivierlambert
Member

We have the anti-affinity rule in Xen Orchestra via the load balancer plugin (see https://xen-orchestra.com/blog/xen-orchestra-5-57/#antiaffinityinloadbalancer). @julien-f can we use the XO API to talk to the plugin directly and set up what we need?

@ddelnano
Collaborator

@lavih apologies for the late reply. I was on vacation the past week. I will talk to @julien-f and see if that can be leveraged here.

@julien-f
Member

The only way to talk to the plugin is to use the plugin.configure method.

If more interactivity is needed, we can update the plugin to expose specific API methods.

@ddelnano
Collaborator

I synced up with @julien-f about this. It would be possible to create a terraform resource for the load balancer plugin to accomplish this re-balancing. That plugin is only available to customers with XO Premium support.

I'll look into implementing this in more detail, but it likely won't happen for a week or two.
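
To give a rough idea of what that could look like, here is a purely hypothetical sketch: neither the resource type nor its arguments exist in the provider today, and the names are placeholders rather than a real schema.

# The pool data source below is real; the resource after it is hypothetical.
data "xenorchestra_pool" "pool" {
  name_label = "Your pool"
}

# Hypothetical sketch only: this resource does not exist in the provider yet
# and its name/arguments are illustrative placeholders.
resource "xenorchestra_load_balancer_plan" "anti_affinity" {
  name     = "spread-service-x"
  pool_ids = [data.xenorchestra_pool.pool.id]

  # tag the plugin's anti-affinity feature would use to keep matching VMs
  # on separate hosts
  anti_affinity_tags = ["service_x"]
}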

@lavih
Author

lavih commented Jun 29, 2021

That would mean I have to use this XO plugin at a specific version and also be an XO Premium member? What if I'm neither?
What if it were something simpler, like checking the number of hosts and scattering the new VMs more or less equally?

@olivierlambert
Member

olivierlambert commented Jun 29, 2021

That's a bit more complicated than writing something that "scatters new VMs more or less equally".

Otherwise, it wouldn't be a dedicated plugin with hundreds of lines of code. But it's Open Source, so you can install the plugin yourself if you run Xen Orchestra from the sources: https://github.com/vatesfr/xen-orchestra/tree/master/packages/xo-server-load-balancer

If you are using XOA (the turnkey virtual appliance), then you need the support plan coming with it, meaning XOA Premium.

Also, I suppose that if you use Xen Orchestra at scale with XCP-ng or XenServer, having official support isn't a bad idea (since there's no "one-shot" support). It also helps the project grow and gets more people working on it.

@ddelnano
Collaborator

What if it were something simpler, like checking the number of hosts and scattering the new VMs more or less equally?

Terraform providers work best as glue between the Terraform code and the API providing the functionality. Encoding detailed scheduling logic in the provider adds complexity that is better handled in the API (like xo-server-load-balancer does), and it typically causes inconsistent and constant diffs during terraform plans.

If you want more control over the scheduling without installing xo-server-load-balancer, and you don't mind a little upfront work, you could do something like this. Note: this code is untested.

# Add a tag to all hosts you want to spread the VMs across
# Note: this functionality isn't available via the terraform provider, but it could be.
# The XO UI or xo-cli tag.add id=<id> tag=service_x would suffice in the meantime

variable "max_service_x_per_host" {
  default = 3
}

data "xenorchestra_pool" "pool" {
  name_label = "Your pool"
}

data "xenorchestra_hosts" "service_x" {
  pool_id = data.xenorchestra_pool.pool.id

  sort_by = "name_label"
  sort_order = "asc"
  
  # filter to the hosts that have the tag indicating that they should run service_x
  tags = [
    "service_x",
  ]
}

resource "xenorchestra_vm" "service_x_vm" {
  count = var.max_service_x_per_host
  for_each = toset(data.xenorchestra_hosts.service_x)

  affinity_host = each.value.id
 
  # Add all your normal attributes here
  ...
  ...
}
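
If you instead want strict anti-affinity (at most one service_x VM per tagged host), a minimal variant of the above, equally untested and reusing the same hosts data source, could iterate with for_each over the host ids:

resource "xenorchestra_vm" "service_x_single" {
  # One VM per tagged host; each.value is the host id
  for_each = toset(data.xenorchestra_hosts.service_x.hosts[*].id)

  affinity_host = each.value

  # Add all your normal attributes here
  ...
  ...
}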
