Releases: cloud-hypervisor/cloud-hypervisor

v21.1

11 Mar 09:08

This is a bug fix release. The following issues have been addressed:

  • Missing openat() syscall from seccomp filter (#3609)
  • Ensure MMIO/PIO exits complete before pausing (#3658)
  • Support DWORD writes to MSI-X control register (#3714)
  • VFIO ioctl reordering to fix MSI on AMD platforms (#3827)
  • Fix virtio-net control queue (#3829)

v22.0

03 Mar 18:10

This release has been tracked through the v22.0 project.

GDB Debug Stub Support

Cloud Hypervisor can now be used as a debug target with GDB. This is
controlled by the gdb compile-time feature and details of how to use it can be
found in the gdb documentation.
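
As an illustrative sketch (binary names and socket paths are placeholders; the
gdb documentation is authoritative for the exact workflow):

    # Build with the gdb compile-time feature, then start a VM with a debug socket
    cargo build --features gdb
    ./cloud-hypervisor --kernel vmlinux --cmdline "console=hvc0" --gdb path=/tmp/ch-gdb.sock
    # From another terminal, attach GDB to the stub over the UNIX socket
    gdb -ex "target remote /tmp/ch-gdb.sock" vmlinux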

virtio-iommu Backed Segments

In order to facilitate hotplugging devices that need to be behind an IOMMU
(e.g. QAT) there is a new option --platform iommu_segments=<list_of_segments>
that will place all the specified segments behind the IOMMU.
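
For example, a hypothetical invocation (the exact list syntax for
iommu_segments may differ; see the option's documentation):

    # Create 16 PCI segments and place segments 1 and 2 behind the virtio-iommu,
    # so devices hotplugged into those segments are IOMMU-backed
    ./cloud-hypervisor ... --platform num_pci_segments=16,iommu_segments=[1,2]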

Before Boot Configuration Changes

It is now possible to change the VM configuration (e.g. add or remove devices,
resize) before the VM is booted.
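
A minimal sketch using the HTTP API (endpoint names per the project's API
documentation; the socket path and disk payload are placeholders):

    # Create the VM, amend its configuration while it is still shut down,
    # then boot with the updated configuration
    curl --unix-socket /tmp/ch.sock -X PUT http://localhost/api/v1/vm.create \
         -H 'Content-Type: application/json' -d @vm-config.json
    curl --unix-socket /tmp/ch.sock -X PUT http://localhost/api/v1/vm.add-disk \
         -H 'Content-Type: application/json' -d '{"path": "/path/to/extra-disk.img"}'
    curl --unix-socket /tmp/ch.sock -X PUT http://localhost/api/v1/vm.boot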

virtio-balloon Free Page Reporting

If --balloon free_page_reporting=on is used then the guest can report pages
that it is not using to the VMM. The VMM will then notify the host OS that
those pages are no longer in use and can be freed. This can result in improved
memory density.
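
For example (the size parameter here is an illustrative value):

    # Start a deflated balloon with free page reporting enabled
    ./cloud-hypervisor ... --balloon size=0,free_page_reporting=on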

Support for Direct Kernel Booting with TDX

Through the use of the TD-Shim lightweight firmware it is now possible to
boot directly into the kernel with TDX. The TDX documentation has been
updated for this usage.
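
A hypothetical invocation (the flag spelling and file names are assumptions;
the TDX documentation describes the exact firmware and kernel requirements):

    # Boot directly into the kernel using the TD-Shim firmware
    ./cloud-hypervisor --tdx firmware=tdshim --kernel bzImage \
        --cmdline "console=hvc0 root=/dev/vda1" ...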

PMU Support for AArch64

A PMU is now available on AArch64 for guest performance profiling. This will be
exposed automatically if available from the host.

Documentation Under CC-BY-4.0 License

The documentation is now licensed under the "Creative Commons Attribution 4.0
International" license which is aligned with the project charter under the
Linux Foundation.

Deprecation of "Classic" virtiofsd

The use of the Rust based virtiofsd
is now recommended and we are no longer testing against the C based "classic"
version.

Notable Bug Fixes

  • Can now be used on kernels without AF_INET support (#3785)
  • virtio-balloon size is now validated against guest RAM size (#3689)
  • Ensure that I/O related KVM VM Exits are correctly handled (#3677)
  • Multiple TAP file descriptors can be used for virtio-net device hotplug (#3607)
  • Minor API improvements and fixes (#3756, #3766, #3647, #3578)
  • Fix sporadic seccomp violation from glibc memory freeing (#3610, #3609)
  • Fix Windows 11 on AArch64 due to wider MSI-X register accesses (#3714, #3720)
  • Ensure vhost-user features are correct across migration (#3737)
  • Improved vCPU topology on AArch64 (#3735, #3733)

Contributors

Many thanks to everyone who has contributed to our release.

v21.0

20 Jan 15:31

This release has been tracked through the v21.0 project.

Efficient Local Live Migration (for Live Upgrade)

In order to support fast live upgrade of the VMM, an optimised path has been added in which the memory for the VM is not compared between source and destination. This is activated by passing --local to the ch-remote send-migration command. This means that the live upgrade can complete on the order of 50ms vs 3s. (#3566)
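
For illustration (socket paths are placeholders):

    # On the destination (running the new VMM binary), prepare to receive
    ch-remote --api-socket=/tmp/ch-dest.sock receive-migration unix:/tmp/migration.sock
    # On the source, send using the optimised local path
    ch-remote --api-socket=/tmp/ch-src.sock send-migration --local unix:/tmp/migration.sock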

Recommended Kernel is Now 5.15

Due to an issue in the virtio-net code in 5.14 the recommended Linux kernel is now 5.15. (#3530)

Notable bug fixes

  • Multiple fixes were made to the OpenAPI YAML file to match the implementation (#3555, #3562)
  • Avoid live migration deadlock when triggered during the kernel boot (#3585)
  • Support live migration within firmware (#3586)
  • Validate the virtio-net descriptor chain (#3548)
  • direct=on (O_DIRECT) can now be used with a guest that makes unaligned accesses (e.g. firmware) (#3587)

Contributors

Many thanks to everyone who has contributed to our release.

v20.2

04 Jan 17:57

This is a bug fix release. The following issues have been addressed:

  • Don't error out when setting up the SIGWINCH handler (for console resize)
    when this fails due to an older kernel (#3456)
  • Seccomp rules were refined to remove syscalls that are now unused
  • Fix reboot on older host kernels when the SIGWINCH handler was not
    initialised (#3496)
  • Fix virtio-vsock blocking issue (#3497)

v20.1

13 Dec 15:06

This is a bug fix release. The following issues have been addressed:

  • Networking performance regression with virtio-net (#3450)
  • Limit file descriptors sent in vfio-user support (#3401)
  • Fully advertise PCI MMIO config regions in ACPI tables (#3432)
  • Set the TSS and KVM identity maps so they don't overlap with firmware RAM
  • Correctly update the DeviceTree on restore

v20.0

02 Dec 16:15

This release has been tracked through the v20.0 project.

Multiple PCI segments support

Cloud Hypervisor is no longer limited to 31 PCI devices. For both x86_64 and
aarch64 architectures, it is now possible to create up to 16 PCI segments,
increasing the total number of supported PCI devices to 496.
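
For example (assuming the num_pci_segments parameter of --platform):

    # Ask for the maximum of 16 PCI segments
    ./cloud-hypervisor ... --platform num_pci_segments=16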

CPU pinning

For each vCPU, the user can define a limited set of host CPUs on which it is
allowed to run. This can be useful when assigning a 1:1 mapping between host and
guest resources, or when running a VM on a specific NUMA node.
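
A sketch of the affinity syntax as understood from the --cpus documentation
(the CPU numbers are placeholders):

    # Pin vCPU 0 to host CPUs 0 and 2, and vCPU 1 to host CPUs 1 and 3
    ./cloud-hypervisor ... --cpus boot=2,affinity=[0@[0,2],1@[1,3]]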

Improved VFIO support

Based on VFIO region capabilities, all regions can be memory mapped, limiting
the number of triggered VM exits and therefore increasing the performance of
the passthrough device.

Safer code

Several sections containing unsafe Rust code have been replaced with safe
alternatives, and multiple comments have been added to clarify why the remaining
unsafe sections are safe to use.

Extended documentation

The documentation related to VFIO has been updated and new documents have
been introduced to cover the usage of the --cpus parameter as well as how to
run Cloud Hypervisor on Intel TDX.

Notable bug fixes

  • Naturally align PCI BARs on relocation (#3244)
  • Fix panic in SIGWINCH listener thread when no seccomp filter set (#3338)
  • Use the tty raw mode implementation from libc (#3344)
  • Fix the emulation of register D for CMOS/RTC device (#3393)

Contributors

Many thanks to everyone who has contributed to our release.

v19.0

14 Oct 15:36

This release has been tracked through the v19.0 project.

Improved PTY handling for serial and virtio-console

The PTY support for serial has been enhanced with improved buffering when the
PTY has not yet been connected to. Using virtio-console with a PTY now results
in the console being resized if the PTY window is also resized.

PCI boot time optimisations

Multiple optimisations have been made to the PCI handling resulting in
significant improvements in the boot time of the guest.

Improved TDX support

When using the latest TDVF firmware the ACPI tables created by the VMM are now
exposed via the firmware to the guest.

Live migration enhancements

Live migration support has been enhanced to support migration with virtio-mem
based memory hotplug and the virtio-balloon device now supports live
migration.

virtio-mem support with vfio-user

vfio-user userspace devices can now be used in conjunction with virtio-mem
based memory hotplug and unplug.

AArch64 for virtio-iommu

A paravirtualised IOMMU can now be used on the AArch64 platform.

Notable bug fixes

  • ACPI hotplugged memory is correctly restored after a live migration or
    snapshot/restore (#3165)
  • Multiple devices from the same IOMMU group can be passed through via VFIO
    (#3078 #3113)
  • Live migration with large blocks of memory was buggy due to an issue in
    the underlying crate (#3157)

Contributors

Many thanks to everyone who has contributed to our release.

v18.0

09 Sep 14:31

This release has been tracked through the v18.0 project.

Experimental User Device (vfio-user) support

Experimental support for running PCI devices in userspace via vfio-user
has been included. This allows the use of the SPDK NVMe vfio-user controller
with Cloud Hypervisor. This is enabled by --user-device on the command line.
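
For example (the socket path depends on how the SPDK target was configured and
is a placeholder here):

    # Attach a vfio-user device served by an SPDK NVMe controller
    ./cloud-hypervisor ... --user-device socket=/var/run/spdk/cntrl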

Migration support for vhost-user devices

Devices exposed into the VM via vhost-user can now be migrated using the live
migration support. This requires support from the backend; the commonly used
DPDK vhost-user backend does support this.

VHDX disk image support

Images using the VHDX disk image format can now be used with Cloud Hypervisor.

Device pass through on MSHV hypervisor

When running on the MSHV hypervisor it is possible to pass through devices
from the host to the guest (e.g. with --device).
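
For example (the PCI address is a placeholder; the device must first be
unbound from its host driver as described in the VFIO documentation):

    # Pass through the host PCI device at the given sysfs path
    ./cloud-hypervisor ... --device path=/sys/bus/pci/devices/0000:00:01.0/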

AArch64 support for virtio-mem

The reference Linux kernel we recommend for using with Cloud Hypervisor now supports virtio-mem on AArch64.

Live migration on MSHV hypervisor

Live migration is now supported when running on the MSHV hypervisor including
efficient tracking of dirty pages.

AArch64 CPU topology support

The CPU topology (as configured through --cpu topology=) can now be
configured on AArch64 platforms and is conveyed through either ACPI or device
tree.
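
For example (field order per the --cpus documentation: threads per core, cores
per die, dies per package, packages):

    # 8 vCPUs arranged as 2 threads x 2 cores x 1 die x 2 packages
    ./cloud-hypervisor ... --cpus boot=8,topology=2:2:1:2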

Power button support on AArch64

Use of the ACPI power button (e.g. ch-remote --api-socket=<API socket> power-button)
is now supported when running on AArch64.

Notable bug fixes

  • Using two PTY outputs e.g. --serial pty --console pty now works correctly (#3012)
  • TTY input is now always sent to the correct destination (#3005)
  • The boot is no longer blocked when using an unattached PTY on the serial console (#3004)
  • Live migration is now supported on AArch64 (#3049)
  • Ensure signal handlers are run on the correct thread (#3069)

Contributors

Many thanks to everyone who has contributed to our release.

v17.0

22 Jul 16:31

This release has been tracked through the v17.0 project.

ARM64 NUMA support using ACPI

The support for ACPI on ARM64 has been enhanced to include support for
specifying a NUMA configuration using the existing control options.

Seccomp support for MSHV backend

The seccomp rules have now been extended to support running against the MSHV
hypervisor backend.

Hotplug of macvtap devices

Hotplug of macvtap devices is now supported, with the file descriptor for the
network device being opened by the user and passed to the VMM. The ch-remote
tool supports this functionality when adding a network device.
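
A sketch of the flow (interface name, fd number and MAC address are
placeholders):

    # Create a macvtap interface on the host, then hand its tap fd to the VMM
    ip link add link eth0 name macvtap0 type macvtap mode bridge
    ip link set macvtap0 up
    tapindex=$(cat /sys/class/net/macvtap0/ifindex)
    ch-remote --api-socket=/tmp/ch.sock add-net fd=3,mac=52:54:00:12:34:56 3<>"/dev/tap$tapindex"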

Improved SGX support

The SGX support has been updated to match the latest Linux kernel support and
now supports SGX provisioning and associating EPC sections to NUMA nodes.

Inflight tracking for vhost-user devices

Support for handling inflight tracking of I/O requests has been added to the
vhost-user devices allowing recovery after device reconnection.

Notable bug fixes

  • VFIO PCI BAR calculation code now correctly handles I/O BARs (#2821).
  • The VMM side of vhost-user devices no longer advertises the
    VIRTIO_F_RING_PACKED feature as it is not yet supported in the VMM
    (#2833).
  • On ARM64, VMs can be created with more than 16 vCPUs (#2763).

Contributors

Many thanks to everyone who has contributed to our release.

v16.0

10 Jun 16:13

This release has been tracked through the v16.0 project.

Improved live migration support

The live migration support inside Cloud Hypervisor has been improved with the addition of tracking of dirty pages written by the VMM, complementing the tracking of dirty pages made by the guest itself. Further, the internal state of the VMM is now versioned, which allows the safe migration of VMs from one version of the VMM to a newer one. However, further testing is required, so this should be done with care. See the live migration documentation for more details.

Improved vhost-user support

When using vhost-user to access devices implemented in different processes there is now support for reconnection of those devices in the case of a restart of the backend. In addition, it is now possible to operate with the direction of the vhost-user-net connection reversed, with the server in the VMM and the client in the backend. This aligns with the default approach recommended by Open vSwitch.
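
For example (assuming the vhost_mode parameter of --net; the socket path and
MAC address are placeholders):

    # Run the vhost-user-net connection with the server side in the VMM
    ./cloud-hypervisor ... --net vhost_user=true,socket=/tmp/vhost-net.sock,vhost_mode=server,mac=52:54:00:02:d9:01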

ARM64 ACPI and UEFI support

Cloud Hypervisor now supports using ACPI and booting from a UEFI image on ARM64. This allows the use of stock OS images without direct kernel boot.

Notable bug fixes

  • Activating fewer virtio-net queues than advertised is now supported. This situation arose when using OVMF with an MQ enabled device (#2578).
  • When using MQ with virtio devices, Cloud Hypervisor now enforces a minimum vCPU count, which ensures that the user will not see adverse guest performance (#2563).
  • The KVM clock is now correctly handled during live migration / snapshot & restore.

Removed functionality

The following formerly deprecated features have been removed:

  • Support for booting with the "LinuxBoot" protocol for ELF and bzImage
    binaries has been removed. When using direct boot, users should configure
    their kernel with CONFIG_PVH=y.

Contributors

Many thanks to everyone who has contributed to our release including some new faces.