We have a mechanism to pass through cpu and memory annotations from the kata-containers runtime, via the pod spec, to select the instance size for some of the cloud providers (see `cloud-api-adaptor/src/cloud-api-adaptor/pkg/adaptor/cloud/cloud.go`, lines 192 to 203 at a5e37f4).
Note: Azure, AWS & IBMCloud can also have the instance profile type specified via the machine type annotation, or via kustomize, which isn't applicable for libvirt.
Currently libvirt doesn't have this option and I think the VM size created is hard-coded in `cloud-api-adaptor/src/cloud-providers/libvirt/libvirt.go` (lines 532 to 533 at a5e37f4):
```
# kcli info vm podvm-simple-test-530932ec
name: podvm-simple-test-530932ec
id: 8af3bdff-ce2e-423a-851c-19a8d9422cf1
status: up
autostart: False
plan:
cpus: 2
memory: 8192
net interface: eth0 mac: 52:54:00:88:83:c6 net: default type: routed
ip: 192.168.122.71
diskname: sda disksize: 6GB diskformat: sata type: qcow2 path: /var/lib/libvirt/images/podvm-simple-test-530932ec-root.qcow2
iso: /var/lib/libvirt/images/podvm-simple-test-530932ec-cloudinit.iso
```
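To make this sizing annotation-driven, the libvirt provider could start from the current hard-coded defaults (2 vCPUs / 8192 MiB, as seen in the kcli output above) and override them from the pod annotations when present. A minimal sketch of that idea follows; the `vmSpec` struct and the `specFromAnnotations` helper are hypothetical names for illustration, not code from the actual repository:

```go
package main

import (
	"fmt"
	"strconv"
)

// Annotation keys defined by the kata-containers runtime.
const (
	annVCPUs  = "io.katacontainers.config.hypervisor.default_vcpus"
	annMemory = "io.katacontainers.config.hypervisor.default_memory" // MiB
)

// vmSpec holds the sizing the libvirt provider would apply to the domain.
// (Hypothetical type for this sketch.)
type vmSpec struct {
	VCPUs    uint
	MemoryMB uint
}

// specFromAnnotations starts from the current hard-coded defaults and
// overrides them from pod annotations when present and valid.
func specFromAnnotations(annotations map[string]string) vmSpec {
	spec := vmSpec{VCPUs: 2, MemoryMB: 8192} // current hard-coded size
	if v, ok := annotations[annVCPUs]; ok {
		if n, err := strconv.ParseUint(v, 10, 32); err == nil && n > 0 {
			spec.VCPUs = uint(n)
		}
	}
	if v, ok := annotations[annMemory]; ok {
		if n, err := strconv.ParseUint(v, 10, 32); err == nil && n > 0 {
			spec.MemoryMB = uint(n)
		}
	}
	return spec
}

func main() {
	ann := map[string]string{annVCPUs: "1", annMemory: "2048"}
	fmt.Println(specFromAnnotations(ann)) // prints {1 2048}
}
```

Invalid or missing annotation values fall back to the defaults, so existing deployments would be unaffected.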
This is a bit of an issue as a) it's not very flexible for workloads and b) for dev scenarios it's a problem: two kcli cluster nodes take 4 vCPU and 6GB vRAM each, so to run the most basic single peer pods test you need 10 vCPU and 20GB RAM. If we can add support for specifying the instance size of the libvirt peer pod VMs, that will allow the flexibility that you get with the other cloud providers.
I don't think that libvirt has the concept of an instance profile/flavour like the other cloud providers, so we'd want to use the `io.katacontainers.config.hypervisor.default_vcpus` and `io.katacontainers.config.hypervisor.default_memory` annotations in the pod to drive the libvirt configuration, rather than using the `io.katacontainers.config.hypervisor.machine_type` annotation like we do for some of the other platforms. I guess the default that we bake into the libvirt kustomize will need to be split into two fields as well for this?
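For illustration, a peer pod requesting a smaller VM via those annotations might look like the following. The pod name and image are placeholders, and `kata-remote` is assumed here as the runtime class used for peer pods:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-peer-pod            # hypothetical name
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "1"
    io.katacontainers.config.hypervisor.default_memory: "2048"  # MiB
spec:
  runtimeClassName: kata-remote   # assumed peer-pods runtime class
  containers:
  - name: app
    image: quay.io/example/app    # placeholder image
```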
Is there a VM profile or flavour concept in libvirt like the cloud providers?
If not we should probably look at supporting the vcpu and memory annotations for libvirt provider to specify a non-default pod VM size
I'm not an expert, but I didn't see a profile/flavour concept when looking around, so I'll update the issue to be clearer about using annotations.
Hmm.. I will check as well. Nonetheless it'll be good to update the description and mention the annotations as well. At least the current limitation of a fixed pod VM size will get fixed.
stevenhorsman changed the title from "libvirt: Add podvm instance size/type support for libvirt" to "libvirt: Add podvm instance size support for libvirt" on May 24, 2024
stevenhorsman changed the title from "libvirt: Add podvm instance size support for libvirt" to "libvirt: Add podvm instance cpu/mem size support for libvirt" on May 24, 2024