Memory violation - VM crashes #21

Open
nrgaway opened this issue Sep 16, 2020 · 9 comments

Comments

@nrgaway
Contributor

nrgaway commented Sep 16, 2020

After implementing the qubes-gui-agent configuration, scripts, and executables within the VM, the virtual machine crashes during certain tasks (repeatable). I am not sure whether it's an issue with libkvmchand, QEMU, or some misconfiguration on the host, but I was hoping you have come across this before or would be able to provide some insight.

The virtual machine crashes after re-installing some packages (and in some other scenarios); the crash always occurs when executing some, but not all, qrexec processes. The qemu-system-x86_64 process receives signal 6 (SIGABRT), which I understand indicates a memory violation, and is forced to terminate immediately.

ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=1165423 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1

The crash occurs once the packages are re-installed, when the DNF helper calls qubes.NotifyUpdates followed by qubes.SyncAppMenus.

Backtrace

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007f72eab06895 in __GI_abort () at abort.c:79
#2  0x000055ada5076474 in qemu_ram_set_idstr (new_block=0x55ada8841950, name=<optimized out>,
    dev=dev@entry=0x55ada8270310) at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/exec.c:2070

    #2047:  /* Called with iothread lock held.  */
    #2048:  void qemu_ram_set_idstr(RAMBlock *new_block, const char *name, DeviceState *dev)
    #2049:  {
    #2050:      RAMBlock *block;
    #2051:
    #2052:      assert(new_block);
    #2053:      assert(!new_block->idstr[0]);
    #2054:
    #2055:      if (dev) {
    #2056:          char *id = qdev_get_dev_path(dev);
    #2057:          if (id) {
    #2058:              snprintf(new_block->idstr, sizeof(new_block->idstr), "%s/", id);
    #2059:              g_free(id);
    #2060:          }
    #2061:      }
    #2062:      pstrcat(new_block->idstr, sizeof(new_block->idstr), name);
    #2063:
    #2064:      RCU_READ_LOCK_GUARD();
    #2065:      RAMBLOCK_FOREACH(block) {
    #2066:          if (block != new_block &&
    #2067:              !strcmp(block->idstr, new_block->idstr)) {
    #2068:              fprintf(stderr, "RAMBlock \"%s\" already registered, abort!\n",
    #2069:                      new_block->idstr);
    #2070*:             abort();
    #2071:          }
    #2072:      }
    #2073:  }

#3  0x000055ada5360fa5 in vmstate_register_ram (mr=0x55ada8270d50, dev=0x55ada8270310)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/migration/savevm.c:2913
#4  0x000055ada52bb81a in ivshmem_common_realize (dev=<optimized out>, errp=0x7ffde033bac0)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/hw/misc/ivshmem.c:945
#5  0x000055ada52eb573 in pci_qdev_realize (qdev=0x55ada8270310, errp=<optimized out>)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/hw/pci/pci.c:2099
#6  0x000055ada525bb55 in device_set_realized (obj=0x7ffde033bb60, value=<optimized out>, errp=0x7ffde033bc88)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/hw/core/qdev.c:876
#7  0x000055ada53cb57b in property_set_bool (obj=0x55ada8270310, v=<optimized out>, name=<optimized out>,
    opaque=0x55ada84ea670, errp=0x7ffde033bc88) at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qom/object.c:2078
#8  0x000055ada53d08b4 in object_property_set_qobject (obj=0x55ada8270310, value=<optimized out>,
    name=0x55ada5537fad "realized", errp=0x7ffde033bc88)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qom/qom-qobject.c:26
#9  0x000055ada53ce2ba in object_property_set_bool (obj=0x55ada8270310, value=<optimized out>,
    name=0x55ada5537fad "realized", errp=0x7ffde033bc88)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qom/object.c:1336
#10 0x000055ada51f0f48 in qdev_device_add (opts=opts@entry=0x55ada7d7cd90, errp=errp@entry=0x7ffde033bd60)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qdev-monitor.c:673
#11 0x000055ada51f12ab in qmp_device_add (qdict=<optimized out>, ret_data=<optimized out>, errp=0x7ffde033bdb8)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qdev-monitor.c:798
#12 0x000055ada54742c0 in do_qmp_dispatch (errp=0x7ffde033bdb0, allow_oob=<optimized out>,
    request=<optimized out>, cmds=0x55ada59ea580 <qmp_commands>)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qapi/qmp-dispatch.c:132
#13 qmp_dispatch (cmds=0x55ada59ea580 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/qapi/qmp-dispatch.c:175
#14 0x000055ada53726ed in monitor_qmp_dispatch (mon=0x55ada7277160, req=<optimized out>)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/monitor/qmp.c:145
#15 0x000055ada5372dac in monitor_qmp_bh_dispatcher (data=<optimized out>)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/monitor/qmp.c:234
#16 0x000055ada54bddac in aio_bh_call (bh=0x55ada71191c0)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/async.c:117
#17 aio_bh_poll (ctx=ctx@entry=0x55ada7117d70) at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/async.c:117
#18 0x000055ada54c1494 in aio_dispatch (ctx=ctx@entry=0x55ada7117d70)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/aio-posix.c:459
#19 0x000055ada54bda32 in aio_ctx_dispatch (source=source@entry=0x55ada7117d70, callback=<optimized out>,
    user_data=<optimized out>) at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/async.c:260
#20 0x00007f72ebea978f in g_main_dispatch (context=0x55ada71f6e20) at ../glib/gmain.c:3309
#21 g_main_context_dispatch (context=0x55ada71f6e20) at ../glib/gmain.c:3974
#22 0x000055ada54c06b8 in glib_pollfds_poll () at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/main-loop.c:219
#23 os_host_main_loop_wait (timeout=2064832694) at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/main-loop.c:242
#24 main_loop_wait (nonblocking=<optimized out>) at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/util/main-loop.c:518
#25 0x000055ada51f52af in main_loop () at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/vl.c:1812
#26 0x000055ada507c79c in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
    at /usr/src/debug/qemu-4.2.1-1.fc32.x86_64/vl.c:4473
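
For what it's worth, the abort does not look like random memory corruption: frame #2 is the explicit duplicate-RAMBlock check in the qemu_ram_set_idstr() source quoted above, which fires when a newly realized ivshmem device produces an idstr (qdev path plus memory-region name) that is already registered. Below is only a minimal standalone sketch of that check (simplified bookkeeping; the "ivshmem-bar2" name and the qdev path are made up for illustration), showing how a second attach that resolves to the same idstr ends in abort() and therefore the SIGABRT seen in the core dump:

    /* Minimal standalone sketch (not QEMU code): models the duplicate-idstr
     * check from the qemu_ram_set_idstr() excerpt above. Names and paths
     * below are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define IDSTR_MAX 256

    struct ram_block {
        char idstr[IDSTR_MAX];
    };

    static struct ram_block blocks[8];
    static int nblocks;

    /* dev_path stands in for qdev_get_dev_path(dev); may be NULL. */
    static void set_idstr(struct ram_block *nb, const char *name, const char *dev_path)
    {
        nb->idstr[0] = '\0';
        if (dev_path) {
            snprintf(nb->idstr, sizeof(nb->idstr), "%s/", dev_path);
        }
        strncat(nb->idstr, name, sizeof(nb->idstr) - strlen(nb->idstr) - 1);

        /* Same duplicate check as the quoted source: another block with an
         * identical idstr means abort(). */
        for (int i = 0; i < nblocks; i++) {
            if (&blocks[i] != nb && !strcmp(blocks[i].idstr, nb->idstr)) {
                fprintf(stderr, "RAMBlock \"%s\" already registered, abort!\n",
                        nb->idstr);
                abort();    /* -> SIGABRT, as in the qemu-system-x86_64 core dump */
            }
        }
    }

    int main(void)
    {
        /* First ivshmem attach registers its shared-memory RAM block. */
        nblocks = 1;
        set_idstr(&blocks[0], "ivshmem-bar2", "/hypothetical/qdev/path");

        /* A later attach that resolves to the same path + name hits the
         * duplicate check and aborts the whole QEMU process. */
        nblocks = 2;
        set_idstr(&blocks[1], "ivshmem-bar2", "/hypothetical/qdev/path");
        return 0;
    }

If that reading is correct, the repeated port-513 ivshmem attaches in the logs below, together with the earlier "no device found with alias shmem4" message, suggest looking for an ivshmem device whose RAM block was never unregistered before the next attach.
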

Logs

# --- HOST ---------------------------------------------------------------------
# FC32 GUEST VM STARTING
02:47:12.732524-0400 HOST qubesd[1304168]: Starting fc32
02:47:12.736677-0400 HOST qubesd[1304168]: Created sparse file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img~hk8uk7r2
02:47:12.741247-0400 HOST qubesd[1304168]: Renamed file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img~hk8uk7r2 -> /var/lib/qubes/vm-templates/fc32/volatile-dirty.img
02:47:12.741414-0400 HOST qubesd[1304168]: Reflinked file: /var/lib/qubes/vm-templates/fc32/private.img -> /var/lib/qubes/vm-templates/fc32/private-dirty.img~mpktsdfq
02:47:12.744233-0400 HOST qubesd[1304168]: Renamed file: /var/lib/qubes/vm-templates/fc32/private-dirty.img~mpktsdfq -> /var/lib/qubes/vm-templates/fc32/private-dirty.img
02:47:12.802028-0400 HOST qubesd[1304168]: Reflinked file: /var/lib/qubes/vm-templates/fc32/root.img -> /var/lib/qubes/vm-templates/fc32/root-dirty.img~tohv2ajt
02:47:12.810621-0400 HOST qubesd[1304168]: Renamed file: /var/lib/qubes/vm-templates/fc32/root-dirty.img~tohv2ajt -> /var/lib/qubes/vm-templates/fc32/root-dirty.img
02:47:12.859244-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:560: Domain fc32(-1, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 0, detail: 1)!
02:47:12.859244-0400 HOST kvmchand[1304148]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
02:47:13.099835-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:560: Domain fc32(67, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 2, detail: 0)!
02:47:13.100219-0400 HOST qubesd[1304168]: Setting Qubes DB info for the VM
02:47:13.102719-0400 HOST qubesd[1304168]: Starting Qubes DB
02:47:13.103236-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 0
02:47:13.103321-0400 HOST libvirtd[1245962]: Domain id=67 name='fc32' uuid=12345678-c061-4f3c-877a-8f429d0ac438 is tainted: custom-monitor
02:47:13.104239-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1304955
02:47:13.105718-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:560: Domain fc32(67, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 3, detail: 0)!
02:47:13.105718-0400 HOST kvmchand[1304148]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
02:47:13.109445-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:620: Client connected! fd: 10

################################################################################
## SERVER: HOST: /usr/sbin/qubesdb-daemon 16 fc32                      PORT: 111
## CLIENT: VM:   /usr/sbin/qubesdb-daemon 0                               FD: 11
02:47:13.109690-0400 HOST kvmchand[1304146]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 67, port 111, read_min: 4096, write_min: 4096
02:47:13.110093-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 2
02:47:13.110844-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1304955
02:47:13.114285-0400 HOST qubesd[1304168]: Activating the fc32 VM
02:47:13.115088-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:560: Domain fc32(67, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 4, detail: 0)!
02:47:13.115088-0400 HOST kvmchand[1304148]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
02:47:13.158283-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:620: Client connected! fd: 11

02:47:13.158438-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:14.158655-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:15.158901-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:16.159050-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:17.159224-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:18.159419-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:19.159625-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:20.159797-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:21.159977-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512

# --- VM -----------------------------------------------------------------------
02:47:21.657177-0400 VM   systemd[1]: Starting KVM vchan daemon...
02:47:21.725840-0400 VM   kernel: vfio-pci 0000:00:10.0: Adding to iommu group 0
02:47:21.725994-0400 VM   kernel: vfio-pci 0000:00:10.0: Adding kernel taint for vfio-noiommu group on device
02:47:21.726065-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1205: Got ivshmem device: 0000:00:10.0
02:47:21.726065-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1205: Got ivshmem device: 0000:00:11.0
02:47:21.726065-0400 VM   kvmchand[513]: [WARN] daemon/vfio.c:1221: Some ivshmem devices arent bound to vfio-pci. Attempting to bind...
02:47:21.727760-0400 VM   kernel: vfio-pci 0000:00:11.0: Adding to iommu group 1
02:47:21.727854-0400 VM   kernel: vfio-pci 0000:00:11.0: Adding kernel taint for vfio-noiommu group on device
02:47:21.728015-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:580: guest_main: sd_notify
02:47:21.728262-0400 VM   kvmchand[513]: [WARN] daemon/vfio.c:1236: Successfully bound ivshmem devices.
02:47:21.729695-0400 VM   systemd[1]: Started KVM vchan daemon.
02:47:21.730141-0400 VM   kernel: vfio-pci 0000:00:10.0: vfio-noiommu device opened by user (kvmchand:513)
02:47:21.730984-0400 VM   systemd[1]: Starting Qubes DB agent...
02:47:21.734145-0400 VM   kernel: vfio-pci 0000:00:11.0: vfio-noiommu device opened by user (kvmchand:513)
02:47:21.735217-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:993: Successfully connected to host daemon.
02:47:21.736210-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:620: Client connected! fd: 8
02:47:21.741656-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 2

# --- HOST ---------------------------------------------------------------------
02:47:22.160137-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:23.160324-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:24.160530-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:25.160718-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512

# --- VM -----------------------------------------------------------------------
02:47:25.742330-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:328: fds: {9, 10, 11, 12, 13}
02:47:25.743238-0400 VM   systemd[1]: Started Qubes DB agent.

02:47:26.122120-0400 VM   systemd[1]: Starting Qubes remote exec agent...
02:47:26.125300-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:620: Client connected! fd: 9
02:47:26.132978-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 3
02:47:26.136249-0400 VM   kernel: pci 0000:00:12.0: [1af4:1110] type 00 class 0x050000
02:47:26.136302-0400 VM   kernel: pci 0000:00:12.0: reg 0x10: [mem 0x00000000-0x000000ff]
02:47:26.136320-0400 VM   kernel: pci 0000:00:12.0: reg 0x14: [mem 0x00000000-0x00000fff]
02:47:26.136337-0400 VM   kernel: pci 0000:00:12.0: reg 0x18: [mem 0x00000000-0x00003fff 64bit pref]
02:47:26.149933-0400 VM   kernel: pci 0000:00:12.0: BAR 2: assigned [mem 0x800134000-0x800137fff 64bit pref]
02:47:26.150198-0400 VM   kernel: pci 0000:00:12.0: BAR 1: assigned [mem 0xc844e000-0xc844efff]
02:47:26.150380-0400 VM   kernel: pci 0000:00:12.0: BAR 0: assigned [mem 0xc844f000-0xc844f0ff]
02:47:26.150482-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.150657-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.150814-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.150894-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.150972-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.151053-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]

# --- HOST ---------------------------------------------------------------------
02:47:26.160893-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512
02:47:27.161060-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 512

# --- VM -----------------------------------------------------------------------
02:47:26.161158-0400 VM   kernel: vfio-pci 0000:00:12.0: Adding to iommu group 2
02:47:26.161328-0400 VM   kernel: vfio-pci 0000:00:12.0: Adding kernel taint for vfio-noiommu group on device
02:47:26.173706-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.173867-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.173946-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.174024-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.174102-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.174206-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.194930-0400 VM   kernel: pci_bus 0000:01: Allocating resources
02:47:26.195031-0400 VM   kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
02:47:26.195118-0400 VM   kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.195707-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.196229-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.199440-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.199539-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.201487-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.201594-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.202233-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.208155-0400 VM   kernel: pci_bus 0000:02: Allocating resources
02:47:26.210159-0400 VM   kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
02:47:26.210275-0400 VM   kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.211504-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.211604-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.213373-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.216436-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.218372-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.218516-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.218635-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.220573-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.223333-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.223487-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.223620-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.225941-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.228352-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.228494-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.229162-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.232332-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.232532-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.233524-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.236171-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.238344-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.239805-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.239949-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.242589-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.242742-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.244715-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.244823-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.248984-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.249125-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.255792-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.255935-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.256054-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.256182-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.256326-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.256435-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.260574-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.260720-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.267620-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.267759-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.267865-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.267969-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.268081-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.273538-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.273677-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.273790-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.273896-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.274000-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.276312-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.278352-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.278501-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.280431-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.283522-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.283715-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.286614-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.286799-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.287449-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.290356-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.293683-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.293875-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.296900-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.297047-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.300209-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.300350-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.300458-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.304161-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.307951-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.308048-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.308128-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.308220-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.316242-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.316385-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.316498-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.316608-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.317419-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.322139-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.322249-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.322325-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.322399-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.322473-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.325421-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.325519-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.328303-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.328401-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.329659-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.330144-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.333616-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.333733-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.333842-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.339070-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.339235-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.340872-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.343922-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.344024-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.344104-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.349932-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.350090-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.350236-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.350353-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.352508-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.352656-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.355384-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.355479-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.356487-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.360801-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.360941-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.364772-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.364912-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.365026-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.367434-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.372405-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.372586-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.372724-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.372858-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.377351-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.377490-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.377596-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.377701-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.379384-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.379578-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.384631-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.384778-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:26.384890-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:26.386511-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:26.386654-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:26.395952-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:26.396100-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:26.396233-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:26.396343-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:26.396453-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

# --- HOST ---------------------------------------------------------------------
################################################################################
## SERVER: VM:   /usr/lib/qubes/qrexec-agent                           PORT: 512
## CLIENT: HOST: /usr/sbin/qrexec-daemon -q 19 fc32 user                  FD: 12
02:47:27.642207-0400 HOST kvmchand[1304146]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 67, client_dom: 0, port 512, read_min: 4096, write_min: 4096
02:47:27.642784-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 3
02:47:27.643633-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1304955

# --- VM -----------------------------------------------------------------------
02:47:30.139252-0400 VM   kernel: vfio-pci 0000:00:12.0: vfio-noiommu device opened by user (kvmchand:517)
02:47:30.139728-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:328: fds: {10, 11, 12, 13, 14}
02:47:30.649353-0400 VM   systemd[1]: Started Qubes remote exec agent.

02:47:30.651699-0400 VM   systemd[1]: Starting Qubes GUI Agent...
02:47:30.660185-0400 VM   systemd[1]: Started Qubes GUI Agent.

02:47:30.668374-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:620: Client connected! fd: 10
02:47:30.673725-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 4
02:47:30.676763-0400 VM   kernel: pci 0000:00:13.0: [1af4:1110] type 00 class 0x050000
02:47:30.676854-0400 VM   kernel: pci 0000:00:13.0: reg 0x10: [mem 0x00000000-0x000000ff]
02:47:30.676892-0400 VM   kernel: pci 0000:00:13.0: reg 0x14: [mem 0x00000000-0x00000fff]

02:47:30.747881-0400 VM   kernel: pci 0000:00:13.0: BAR 2: assigned [mem 0x800140000-0x80017ffff 64bit pref]
02:47:30.748092-0400 VM   kernel: pci 0000:00:13.0: BAR 1: assigned [mem 0xc8450000-0xc8450fff]
02:47:30.748171-0400 VM   kernel: pci 0000:00:13.0: BAR 0: assigned [mem 0xc8451000-0xc84510ff]
02:47:30.748238-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.748320-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.748393-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.748475-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.748550-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.748628-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.748702-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.748775-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.748852-0400 VM   kernel: vfio-pci 0000:00:13.0: Adding to iommu group 3
02:47:30.748917-0400 VM   kernel: vfio-pci 0000:00:13.0: Adding kernel taint for vfio-noiommu group on device

02:47:30.748979-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.749051-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.749126-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.749210-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.749303-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.749376-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.749451-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.749526-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.749621-0400 VM   kernel: pci_bus 0000:01: Allocating resources
02:47:30.749701-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.749800-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.749902-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.750006-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.750093-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.750183-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.750278-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.750360-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.750437-0400 VM   kernel: pci_bus 0000:02: Allocating resources
02:47:30.750505-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.750585-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.750662-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.750739-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.750815-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.750890-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.750975-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.751055-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.751235-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.751322-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.751400-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.751475-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.751551-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.751627-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.751702-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.751775-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.751872-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.751962-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.752037-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.752111-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.752198-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.752286-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.752361-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.752434-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.752509-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.752587-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.752662-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.752738-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.752811-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.752886-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.752961-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.753044-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.753122-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.753207-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.753295-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.753374-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.753449-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.753526-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.753631-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.753707-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.753781-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.753858-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.753935-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.754010-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.754086-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.756720-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.756813-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.756893-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.756965-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.757037-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.757114-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.757211-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.757302-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.757374-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.757448-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.757520-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.836239-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.836361-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.836438-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.836522-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.836602-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.836683-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.836756-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.836830-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.836901-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.836973-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.837046-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.837121-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.837207-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.837287-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.837360-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.837436-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.837509-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.837580-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.837652-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.837724-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.837799-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.837881-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.837954-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.838026-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.838099-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.838184-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.838263-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.838339-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.838411-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.838484-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.838557-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.838628-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.838699-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.838771-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.838844-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.838918-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.838990-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.839064-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.839150-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.839239-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.839314-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.839387-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.839459-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.839531-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.839602-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.839678-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.839752-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.839828-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.839906-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.839978-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.840049-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.840120-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.840452-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.840530-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.840611-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.840684-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.840756-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.840828-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.840906-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.840980-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.841052-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.841124-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.841217-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.841294-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.841368-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.841442-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.841515-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.841595-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.841668-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.841742-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.841815-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.841887-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
02:47:30.841961-0400 VM   kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
02:47:30.842035-0400 VM   kernel: pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
02:47:30.842109-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
02:47:30.842191-0400 VM   kernel: pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
02:47:30.842279-0400 VM   kernel: pci 0000:00:03.0: PCI bridge to [bus 02]
02:47:30.842350-0400 VM   kernel: pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
02:47:30.842420-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
02:47:30.842491-0400 VM   kernel: pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]

# --- HOST ---------------------------------------------------------------------
################################################################################
## SERVER: VM:                                                         PORT: 513
## CLIENT: HOST:                                                          FD: 12
02:47:32.179271-0400 HOST qrexec-policy-daemon[1304175]: qrexec: qubes.GetDate+: fc32 -> @default: allowed to @adminvm
02:47:32.181341-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
02:47:32.181699-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:47:32.182546-0400 HOST kvmchand[1304146]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 67, client_dom: 0, port 513, read_min: 65536, write_min: 65536
02:47:32.183150-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 4
02:47:32.184300-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1304955

# --- VM -----------------------------------------------------------------------
02:47:34.683189-0400 VM   kernel: vfio-pci 0000:00:13.0: vfio-noiommu device opened by user (kvmchand:517)
02:47:34.683742-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:328: fds: {11, 12, 13, 14, 15}
02:47:34.734956-0400 VM   qvm-sync-clock[1180]: Invalid date received, aborting!
02:47:34.739376-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1117: resp from host: err: 0, ret: 0
02:47:34.739658-0400 VM   kvmchand[514]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 10
02:47:34.739898-0400 VM   systemd[1]: qubes-sync-time.service: Succeeded.

02:47:34.741091-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:1257: VFIO requesting ivshmem detach. fd: 42
02:47:34.741141-0400 VM   kernel: vfio-pci 0000:00:13.0: Relaying device request to user (#0)
02:47:34.741282-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:703: Stopping outgoing eventfd handler thread.
02:47:34.744835-0400 VM   kvmchand[513]: [INFO] daemon/vfio.c:205: Removing unused group /dev/vfio/noiommu-3.
02:47:34.747146-0400 VM   kernel: vfio-pci 0000:00:13.0: Removing from iommu group 3

# --- HOST ---------------------------------------------------------------------
02:47:36.248336-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12
# Libvirt not notified of shmem4 removal?
02:47:36.316634-0400 HOST libvirtd[1245962]: internal error: no device found with alias shmem4

# --- VM -----------------------------------------------------------------------
# LOGIN
02:49:24.371085-0400 VM   login[1140]: ROOT LOGIN ON ttyS0

# Re-install qubes-core-qrexec-* packages
02:50:09.306000-0400 VM   audit[1486]: USER_ACCT pid=1486 uid=0 auid=0 ses=2 msg='op=PAM:accounting grantors=pam_unix acct="root" exe="/usr/bin/sudo" hostname=fc32 addr=? terminal=/dev/ttyS0 res=success'
02:50:09.307351-0400 VM   sudo[1486]:     root : TTY=ttyS0 ; PWD=/rpm ; USER=root ; COMMAND=/usr/bin/dnf -y reinstall qubes-core-qrexec-vm-4.1.8-1.fc32.x86_64.rpm qubes-core-qrexec-vm-debuginfo-4.1.8-1.fc32.x86_64.rpm qubes-core-qrexec-vm-debugsource-4.1.8-1.fc32.x86_64.rpm qubes-core-qrexec-4.1.8-1.fc32.x86_64.rpm qubes-core-qrexec-debugsource-4.1.8-1.fc32.x86_64.rpm qubes-core-qrexec-libs-4.1.8-1.fc32.x86_64.rpm qubes-core-qrexec-libs-debuginfo-4.1.8-1.fc32.x86_64.rpm

02:50:09.308038-0400 VM   kernel: kauditd_printk_skb: 6 callbacks suppressed
02:50:09.308083-0400 VM   kernel: audit: type=1101 audit(1600224609.306:180): pid=1486 uid=0 auid=0 ses=2 msg='op=PAM:accounting grantors=pam_unix acct="root" exe="/usr/bin/sudo" hostname=fc32 addr=? terminal=/dev/ttyS0 res=success'
02:50:09.307000-0400 VM   audit[1486]: USER_CMD pid=1486 uid=0 auid=0 ses=2 msg='cwd="/rpm" cmd=646E66202D79207265696E7374616C6C2071756265732D636F72652D7172657865632D766D2D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D766D2D6465627567696E666F2D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D766D2D6465627567736F757263652D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D6465627567736F757263652D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D6C6962732D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D6C6962732D6465627567696E666F2D342E312E382D312E666333322E7838365F36342E72706D exe="/usr/bin/sudo" terminal=ttyS0 res=success'

02:50:09.317877-0400 VM   kernel: audit: type=1123 audit(1600224609.307:181): pid=1486 uid=0 auid=0 ses=2 msg='cwd="/rpm" cmd=646E66202D79207265696E7374616C6C2071756265732D636F72652D7172657865632D766D2D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D766D2D6465627567696E666F2D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D766D2D6465627567736F757263652D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D6465627567736F757263652D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D6C6962732D342E312E382D312E666333322E7838365F36342E72706D2071756265732D636F72652D7172657865632D6C6962732D6465627567696E666F2D342E312E382D312E666333322E7838365F36342E72706D exe="/usr/bin/sudo" terminal=ttyS0 res=success'


02:50:09.307000-0400 VM   audit[1486]: CRED_REFR pid=1486 uid=0 auid=0 ses=2 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=fc32 addr=? terminal=/dev/ttyS0 res=success'

02:50:09.319647-0400 VM   kernel: audit: type=1110 audit(1600224609.307:182): pid=1486 uid=0 auid=0 ses=2 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=fc32 addr=? terminal=/dev/ttyS0 res=success'

02:50:09.320212-0400 VM   sudo[1486]: pam_unix(sudo:session): session opened for user root by root(uid=0)
02:50:09.320000-0400 VM   audit[1486]: USER_START pid=1486 uid=0 auid=0 ses=2 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=fc32 addr=? terminal=/dev/ttyS0 res=success'
02:50:09.323155-0400 VM   kernel: audit: type=1105 audit(1600224609.320:183): pid=1486 uid=0 auid=0 ses=2 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=fc32 addr=? terminal=/dev/ttyS0 res=success'

# started man-db-cache-update (start never appeared previously)
02:50:14.089879-0400 VM   systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
02:50:14.090000-0400 VM   audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=run-re442de80587448479c38bc78197aae68 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
02:50:14.095178-0400 VM   kernel: audit: type=1130 audit(1600224614.090:184): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=run-re442de80587448479c38bc78197aae68 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'

# starting/started man-db-cache-update
02:50:14.097842-0400 VM   systemd[1]: Starting man-db-cache-update.service...
02:50:14.153211-0400 VM   systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.

02:50:14.153000-0400 VM   audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=run-rfa8dcb61d32f4611a9183100190181e5 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
02:50:14.156146-0400 VM   kernel: audit: type=1130 audit(1600224614.153:185): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=run-rfa8dcb61d32f4611a9183100190181e5 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'

# **** NO MORE VM LOG ENTRIES **************************************************

# --- HOST ---------------------------------------------------------------------
################################################################################
## SERVER: VM:                                                         PORT: 513
## CLIENT: HOST:                                                          FD: 12
02:50:16.156667-0400 HOST qrexec-policy-daemon[1304175]: qrexec: qubes.NotifyUpdates+: fc32 -> @adminvm: allowed to @adminvm
02:50:16.158769-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
02:50:16.159157-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:16.159157-0400 HOST kvmchand[1304146]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 67, client_dom: 0, port 513, read_min: 65536, write_min: 65536
02:50:16.159432-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 5
02:50:16.160104-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1304955
02:50:21.189078-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12

################################################################################
## SERVER: VM:                                                         PORT: 513
## CLIENT: HOST:                                                          FD: 12
02:50:21.207168-0400 HOST qrexec-policy-daemon[1304175]: qrexec: qubes.SyncAppMenus+: fc32 -> @adminvm: allowed to @adminvm
02:50:21.209038-0400 HOST kvmchand[1304154]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
02:50:21.209125-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:21.209454-0400 HOST kvmchand[1304146]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 67, client_dom: 0, port 513, read_min: 65536, write_min: 65536
02:50:21.209775-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 6
02:50:21.210639-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1304955

# --- CRASH --------------------------------------------------------------------------------------------------
02:50:21.212000-0400 HOST audit[1304955]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=1304955 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1
02:50:21.233185-0400 HOST systemd[1]: Started Process Core Dump (PID 1305235/UID 0).

02:50:21.474435-0400 HOST kvmchand[1304152]: [ERROR] daemon/ivshmem.c:905: Received unknown message from client on fd 35
02:50:21.474435-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 35
02:50:21.474435-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 29
02:50:21.474435-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 23
02:50:21.495032-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 10
02:50:21.495063-0400 HOST libvirtd[1245962]: Unable to read from monitor: Connection reset by peer
02:50:21.495394-0400 HOST kvmchand[1304148]: libvirt: QEMU Driver error : Unable to read from monitor: Connection reset by peer
02:50:21.495394-0400 HOST kvmchand[1304148]: [71B blob data]
02:50:21.495394-0400 HOST kvmchand[1304148]: [ERROR] daemon/libvirt.c:710: Failed to attach ivshmem to id 67!
02:50:21.495469-0400 HOST kvmchand[1304146]: [ERROR] daemon/connections.c:233: Failed to attach ivshmem device to server domain 67.
02:50:21.495498-0400 HOST kvmchand[1304152]: [INFO] daemon/ivshmem.c:501: Stopping listener thread.

02:50:21.535406-0400 HOST systemd-machined[2427]: Machine qemu-67-fc32 terminated.
02:50:21.703228-0400 HOST systemd-coredump[1305245]: Process 1304955 (qemu-system-x86) of user 107 dumped core.

                           Stack trace of thread 1304955:
                           #0  0x00007f72eab1d9e5 __GI_raise (libc.so.6 + 0x3c9e5)
                           #1  0x00007f72eab06895 __GI_abort (libc.so.6 + 0x25895)
                           #2  0x000055ada5076474 qemu_ram_set_idstr (qemu-system-x86_64 + 0x2f1474)
                           #3  0x000055ada5360fa5 vmstate_register_ram (qemu-system-x86_64 + 0x5dbfa5)
                           #4  0x000055ada52bb81a ivshmem_common_realize (qemu-system-x86_64 + 0x53681a)
                           #5  0x000055ada52eb573 pci_qdev_realize (qemu-system-x86_64 + 0x566573)
                           #6  0x000055ada525bb55 device_set_realized (qemu-system-x86_64 + 0x4d6b55)
                           #7  0x000055ada53cb57b property_set_bool (qemu-system-x86_64 + 0x64657b)
                           #8  0x000055ada53d08b4 object_property_set_qobject (qemu-system-x86_64 + 0x64b8b4)
                           #9  0x000055ada53ce2ba object_property_set_bool (qemu-system-x86_64 + 0x6492ba)
                           #10 0x000055ada51f0f48 qdev_device_add (qemu-system-x86_64 + 0x46bf48)
                           #11 0x000055ada51f12ab qmp_device_add (qemu-system-x86_64 + 0x46c2ab)
                           #12 0x000055ada54742c0 do_qmp_dispatch (qemu-system-x86_64 + 0x6ef2c0)
                           #13 0x000055ada53726ed monitor_qmp_dispatch (qemu-system-x86_64 + 0x5ed6ed)
                           #14 0x000055ada5372dac monitor_qmp_bh_dispatcher (qemu-system-x86_64 + 0x5eddac)
                           #15 0x000055ada54bddac aio_bh_call (qemu-system-x86_64 + 0x738dac)
                           #16 0x000055ada54c1494 aio_dispatch (qemu-system-x86_64 + 0x73c494)
                           #17 0x000055ada54bda32 aio_ctx_dispatch (qemu-system-x86_64 + 0x738a32)
                           #18 0x00007f72ebea978f g_main_dispatch (libglib-2.0.so.0 + 0x5278f)
                           #19 0x000055ada54c06b8 glib_pollfds_poll (qemu-system-x86_64 + 0x73b6b8)
                           #20 0x000055ada51f52af main_loop (qemu-system-x86_64 + 0x4702af)
                           #21 0x000055ada507c79c main (qemu-system-x86_64 + 0x2f779c)
                           #22 0x00007f72eab08042 __libc_start_main (libc.so.6 + 0x27042)
                           #23 0x000055ada507fafe _start (qemu-system-x86_64 + 0x2faafe)

                           Stack trace of thread 1304978:
                           #0  0x00007f72eabd93bb ioctl (libc.so.6 + 0xf83bb)
                           #1  0x000055ada50e7519 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x362519)
                           #2  0x000055ada50e75d9 kvm_cpu_exec (qemu-system-x86_64 + 0x3625d9)
                           #3  0x000055ada50cbaac qemu_kvm_cpu_thread_fn (qemu-system-x86_64 + 0x346aac)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305157:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305156:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1304982:
                           #0  0x00007f72eabd7aaf __GI___poll (libc.so.6 + 0xf6aaf)
                           #1  0x00007f72ebea9aae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
                           #2  0x00007f72ebea9e33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                           #3  0x00007f72eb76d2db red_worker_main (libspice-server.so.1 + 0x542db)
                           #4  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #5  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1304977:
                           #0  0x00007f72eabd7aaf __GI___poll (libc.so.6 + 0xf6aaf)
                           #1  0x00007f72ebea9aae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
                           #2  0x00007f72ebea9e33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                           #3  0x000055ada51eefb1 iothread_run (qemu-system-x86_64 + 0x469fb1)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305164:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305185:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305180:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305186:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305184:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305163:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305183:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1304974:
                           #0  0x00007f72eaba9801 __GI___clock_nanosleep (libc.so.6 + 0xc8801)
                           #1  0x00007f72eabaf157 __GI___nanosleep (libc.so.6 + 0xce157)
                           #2  0x00007f72ebed52d7 g_usleep (libglib-2.0.so.0 + 0x7e2d7)
                           #3  0x000055ada54d68a8 call_rcu_thread (qemu-system-x86_64 + 0x7518a8)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305190:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305165:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305188:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305181:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305191:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1304979:
                           #0  0x00007f72eabd93bb ioctl (libc.so.6 + 0xf83bb)
                           #1  0x000055ada50e7519 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x362519)
                           #2  0x000055ada50e75d9 kvm_cpu_exec (qemu-system-x86_64 + 0x3625d9)
                           #3  0x000055ada50cbaac qemu_kvm_cpu_thread_fn (qemu-system-x86_64 + 0x346aac)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305187:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

                           Stack trace of thread 1305189:
                           #0  0x00007f72eacbdc58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                           #1  0x00007f72eacbdd83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                           #2  0x000055ada54c3d6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                           #3  0x000055ada54be864 worker_thread (qemu-system-x86_64 + 0x739864)
                           #4  0x000055ada54c3683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                           #5  0x00007f72eacb4432 start_thread (libpthread.so.0 + 0x9432)
                           #6  0x00007f72eabe2913 __clone (libc.so.6 + 0x101913)

02:50:21.714358-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:560: Domain fc32(67, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 5, detail: 5)!
02:50:21.714358-0400 HOST kvmchand[1304148]: [INFO] daemon/libvirt.c:603: Removing domain 12345678-c061-4f3c-877a-8f429d0ac438
02:50:21.714585-0400 HOST kvmchand[1304146]: [INFO] daemon/connections.c:381: Removed vchan between dom 67 (server) and dom 0, (client)

# Qubes restoring images
02:50:21.717828-0400 HOST qubesd[1304168]: Removed file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img
02:50:21.781032-0400 HOST qubesd[1304168]: Renamed file: /var/lib/qubes/vm-templates/fc32/private-dirty.img -> /var/lib/qubes/vm-templates/fc32/private.img
02:50:21.835413-0400 HOST qubesd[1304168]: Reflinked file: /var/lib/qubes/vm-templates/fc32/root.img -> /var/lib/qubes/vm-templates/fc32/root.img.52@2020-09-16T06:45:55Z~rl7wao9f
02:50:21.922713-0400 HOST qubesd[1304168]: Renamed file: /var/lib/qubes/vm-templates/fc32/root.img.52@2020-09-16T06:45:55Z~rl7wao9f -> /var/lib/qubes/vm-templates/fc32/root.img.52@2020-09-16T06:45:55Z
02:50:21.954604-0400 HOST qubesd[1304168]: Removed file: /var/lib/qubes/vm-templates/fc32/root.img.51@2020-09-16T06:13:17Z
02:50:22.136857-0400 HOST qubesd[1304168]: Renamed file: /var/lib/qubes/vm-templates/fc32/root-dirty.img -> /var/lib/qubes/vm-templates/fc32/root.img

# Crash notification
02:50:23.221919-0400 HOST abrt-notification[1305391]: Process 1304955 (qemu-system-x86_64) crashed in qemu_ram_set_idstr.cold()

# libvchan_client_init is still trying to connect since it does not know the VM was killed :) [See issue with notes]
02:50:24.209790-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:25.210032-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:26.210255-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:27.210451-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:28.210677-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:29.210894-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:30.211075-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:31.211288-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
02:50:32.211518-0400 HOST kvmchand[1304146]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 67 port 513
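
As an aside, a bounded retry that also checks whether the server domain still exists would stop the loop above once the VM is gone. A minimal sketch of that idea (not kvmchand's actual code; try_connect_vchan() is an invented stand-in, only the libvirt calls are real):

#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

/* try_connect_vchan() stands in for the real connection attempt. */
extern int try_connect_vchan(int dom_id, int port);

/* Returns 1 while the domain still exists, 0 once libvirt no longer knows it. */
static int domain_alive(virConnectPtr conn, int dom_id)
{
    virDomainPtr dom = virDomainLookupByID(conn, dom_id);
    if (!dom)
        return 0;
    virDomainFree(dom);
    return 1;
}

int wait_for_vchan(virConnectPtr conn, int dom_id, int port, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        if (!domain_alive(conn, dom_id)) {
            fprintf(stderr, "dom %d is gone, giving up\n", dom_id);
            return -1;
        }
        if (try_connect_vchan(dom_id, port) == 0)
            return 0;
        sleep(1);   /* matches the once-per-second WARN lines above */
    }
    return -1;
}
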
@shawnanastasio
Owner

I can't say I've ever encountered this before. From your backtrace it looks like you're hitting a manual assertion in QEMU in the ivshmem hotplug path.

One thing I noticed is that your VM log shows a lot of ivshmem devices being added and removed in short succession. From some investigation, the failing code in QEMU seems to detect when a memory region is being added with the same internal device path as an existing region. Maybe by adding/removing devices quickly, kvmchand is somehow hitting a condition in QEMU where a new ivshmem device is added at the same device path as a recently removed device before that device has been fully removed?

I'm trying to decipher the logs to determine why ivshmem devices are being added/removed so quickly, but I can't really make sense of it. I know that kvmchand will try to remove ivshmem devices if the process that created the corresponding vchan exits, so maybe there's a daemon that's starting up, creating a vchan, failing somehow and exiting, then getting immediately restarted by systemd? This could be happening on either the host or the VM. Do you think that there's anything like this that could be happening?
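
If it helps narrow this down, one way to poke at that theory from outside kvmchand might be to hammer attach/detach of an ivshmem device in a tight loop and see whether QEMU can be driven into the same abort. A rough reproducer sketch, assuming libvirt's shmem device XML and the /tmp/kvmchand/ivshmem_socket path (which may not match how kvmchand actually attaches the device):

/* Build: gcc ivshmem-thrash.c $(pkg-config --cflags --libs libvirt) -o ivshmem-thrash */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static const char *shmem_xml =
    "<shmem name='kvmchand-repro'>"
    "  <model type='ivshmem-doorbell'/>"
    "  <server path='/tmp/kvmchand/ivshmem_socket'/>"
    "  <msi vectors='2'/>"
    "</shmem>";

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <domain-id>\n", argv[0]);
        return 1;
    }

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByID(conn, atoi(argv[1]));
    if (!dom)
        return 1;

    for (int i = 0; i < 50; i++) {
        /* Attach and immediately detach; the theory is that a fresh attach
         * can land while the previous detach is still in flight and collide
         * on the same internal device path inside QEMU. */
        if (virDomainAttachDeviceFlags(dom, shmem_xml, VIR_DOMAIN_AFFECT_LIVE) < 0)
            fprintf(stderr, "attach %d failed\n", i);
        if (virDomainDetachDeviceFlags(dom, shmem_xml, VIR_DOMAIN_AFFECT_LIVE) < 0)
            fprintf(stderr, "detach %d failed\n", i);
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
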

@nrgaway
Contributor Author

nrgaway commented Sep 18, 2020

I will check whether anything is getting immediately restarted by systemd after failing. Reviewing the log above again, I noticed that right before the crash, when qrexec-client-vm dom0 qubes.NotifyUpdates /bin/sh -c 'echo 1' and qrexec-client-vm dom0 qubes.SyncAppMenus /etc/qubes-rpc/qubes.GetAppmenus were run, no vfio-pci devices were created or logged as being created. Normally after qrexec-client-vm runs I would expect logs for qubes.NotifyUpdates similar to the following:

--- HOST -----------------------------------------------------------------------
Sep 18 06:18:15 localhost.localdomain qrexec-policy-daemon[1369149]: qrexec: qubes.NotifyUpdates+: fc32 -> @adminvm: allowed to @adminvm
Sep 18 06:18:15 localhost.localdomain kvmchand[1369129]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
Sep 18 06:18:15 localhost.localdomain kvmchand[1369120]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 11 port 513
Sep 18 06:18:15 localhost.localdomain kvmchand[1369120]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 11, client_dom: 0, port 513, read_min: 65536, write_min: 65536
Sep 18 06:18:15 localhost.localdomain kvmchand[1369122]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 7
Sep 18 06:18:15 localhost.localdomain kvmchand[1369125]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1369919
Sep 18 06:18:20 localhost.localdomain kvmchand[1369129]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12

--- VM -------------------------------------------------------------------------
[  705.071199] pci 0000:00:13.0: [1af4:1110] type 00 class 0x050000
[  705.072266] pci 0000:00:13.0: reg 0x10: [mem 0x00000000-0x000000ff]
[  705.073093] pci 0000:00:13.0: reg 0x14: [mem 0x00000000-0x00000fff]
[  705.073954] pci 0000:00:13.0: reg 0x18: [mem 0x00000000-0x0003ffff 64bit pref]
[  705.075770] pci 0000:00:13.0: BAR 2: assigned [mem 0x800140000-0x80017ffff 64bit pref]
[  705.076940] pci 0000:00:13.0: BAR 1: assigned [mem 0xc8450000-0xc8450fff]
[  705.077901] pci 0000:00:13.0: BAR 0: assigned [mem 0xc8451000-0xc84510ff]
[  705.078931] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.079356] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.080869] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.081996] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.083764] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.084194] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.085544] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.086682] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.089555] vfio-pci 0000:00:13.0: Adding to iommu group 3
[  705.090098] vfio-pci 0000:00:13.0: Adding kernel taint for vfio-noiommu group on device
[  705.090831] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.091385] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.093527] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.094836] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.096532] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.096992] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.098220] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.099249] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.102371] pci_bus 0000:01: Allocating resources
[  705.102374] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.102831] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.104102] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.105249] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.107011] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.107444] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.108783] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.109918] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.114034] pci_bus 0000:02: Allocating resources
[  705.114039] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.114677] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.116383] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.117563] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.119124] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.119563] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.120739] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.122138] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.123881] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.124335] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.125514] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.126539] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.128154] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.128590] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.129779] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.130892] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.133603] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.134046] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.135627] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.137291] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.138925] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.139373] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.140549] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.141592] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.145056] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.145702] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.147513] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.148588] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.150120] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.150538] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.151697] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.152672] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.154184] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.154608] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.156275] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.157416] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.159051] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.159477] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.160652] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.161649] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.163210] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.163636] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.164875] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.166946] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.169307] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.169993] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.171388] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.172489] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.174107] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.174569] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.175795] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.178211] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.180040] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.180737] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.182369] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.183422] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.184998] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.185454] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.186648] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.188078] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.189891] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.190366] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.191613] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.192669] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.194289] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.194746] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.195933] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.196973] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.199615] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.200066] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.201482] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.202646] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.204302] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.204770] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.205973] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.207013] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.209845] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.210341] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.211744] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.213274] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.215128] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.215555] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.216711] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.217686] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.219175] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.219605] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.221222] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.222344] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.223962] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.224382] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.225533] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.226520] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.228011] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.228425] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.229562] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.230666] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.233838] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.234262] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.235827] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.237212] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.238711] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.239129] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.240268] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.241246] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.243822] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.244252] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.245908] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.247317] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.248815] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.249237] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.250346] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.251319] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.252825] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.253243] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.255211] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.256431] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.258674] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.259239] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.260328] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.261270] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.262709] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.263109] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.264204] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.265158] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.267157] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.267563] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.268817] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.269803] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  705.271289] pci 0000:00:02.0: PCI bridge to [bus 01]
[  705.271696] pci 0000:00:02.0:   bridge window [io  0xd000-0xdfff]
[  705.272806] pci 0000:00:02.0:   bridge window [mem 0xc8200000-0xc83fffff]
[  705.273762] pci 0000:00:02.0:   bridge window [mem 0x800200000-0x8003fffff 64bit pref]
[  705.275246] pci 0000:00:03.0: PCI bridge to [bus 02]
[  705.276383] pci 0000:00:03.0:   bridge window [io  0xc000-0xcfff]
[  705.277803] pci 0000:00:03.0:   bridge window [mem 0xc8000000-0xc81fffff]
[  705.278929] pci 0000:00:03.0:   bridge window [mem 0x800400000-0x8005fffff 64bit pref]
[  709.071944] vfio-pci 0000:00:13.0: vfio-noiommu device opened by user (kvmchand:517)
[  710.100345] vfio-pci 0000:00:13.0: Relaying device request to user (#0)
[  710.108227] vfio-pci 0000:00:13.0: Removing from iommu group 3

I also tried running SyncAppMenus on its own and got varied results:

  • crash with localhost.localdomain kvmchand[1372872]: malloc_consolidate(): invalid chunk size
  • localhost.localdomain kvmchand[1372872]: malloc_consolidate(): invalid chunk size
  • qrexec-client[1371467]: process_io.c:36:handle_vchan_error: Error while vchan read, exiting

Most likely the issue is with SyncAppMenus, so I will work on figuring out why it is failing and also try increasing the qrexec-client timeout alarm to something like 60 seconds (the default is 2; I already increased it to 10) to see if that makes any difference.
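
For reference, by "timeout alarm" I mean roughly the following pattern (a generic illustration only, not the actual qrexec-client code; connect_vchan() is an invented stand-in):

#include <signal.h>
#include <unistd.h>

/* connect_vchan() stands in for the real blocking connect. */
extern int connect_vchan(int domain, int port);

static void timeout_handler(int sig)
{
    (void)sig;
    /* Only async-signal-safe calls here. */
    write(2, "vchan connect timed out\n", 24);
    _exit(1);
}

int connect_with_timeout(int domain, int port, unsigned timeout_s)
{
    signal(SIGALRM, timeout_handler);
    alarm(timeout_s);                       /* e.g. 60 instead of 2 or 10 */
    int fd = connect_vchan(domain, port);   /* may block until the alarm fires */
    alarm(0);                               /* cancel the alarm on success */
    return fd;
}
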

@nrgaway
Contributor Author

nrgaway commented Sep 19, 2020

I've been trying to figure out this issue and am making some progress. I have found some other issues along the way which, when resolved, may render this one moot. I have added more debug logging to check whether anything is getting immediately restarted by systemd after failing. While I am not quite sure whether systemd is restarting anything, I can see from the host logs below that qrexec-policy-daemon does immediately attempt to establish a connection to the VM.

The following log is generated when running qrexec-client-vm dom0 qubes.SyncAppMenus /etc/qubes-rpc/qubes.GetAppmenus on the VM, which calls the host's qubes.SyncAppMenus RPC script. The host script then connects back to the VM's qubes.GetAppmenus script, which passes the contents of all the .desktop files on the VM back to the host. Once the host receives the list, it reconnects to the VM once per .desktop file (~94 times) to fetch the icons.

06:04:25 HOST qrexec-policy-daemon[1524778]: qrexec: qubes.SyncAppMenus+: fc32 -> @adminvm: allowed to @adminvm
06:04:25 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:25.951 qrexec-client[1525770]: qrexec-client.c:611:main: client_init: Connecting to 45, port=513, timeout=60
06:04:25 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
06:04:25 HOST kvmchand[1524749]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 45 port 513
06:04:25 HOST kvmchand[1524749]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 45, client_dom: 0, port 513, read_min: 65536, write_min: 65536
06:04:25 HOST kvmchand[1524751]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 5
06:04:25 HOST kvmchand[1524753]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1525656
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:630:main: data_vchan connected
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=8
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=8, size=8
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=4
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=4, size=4
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:632:main: data_protocol_version: 3
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:635:main: prepare_ret: 0
06:04:30 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:30.955 qrexec-client[1525770]: qrexec-client.c:638:main: select_loop...
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: remote.c:65:handle_remote_data: handle_remote_data: while loop
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65536, size=65536
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: remote.c:65:handle_remote_data: handle_remote_data: while loop
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65491
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65490, size=65491
06:04:31 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:31.455 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=65490, ret=1, size=65491
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: remote.c:65:handle_remote_data: handle_remote_data: while loop
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=-1, size=65536
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: txrx-vchan.c:81:read_vchan_all: read_vchan_all: ret < 0, ret=-1
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: remote.c:76:handle_remote_data: handle_remote_data: !read_vchan_all
06:04:40 HOST qrexec-policy-daemon[1525770]: 2020-09-19 06:04:40.967 qrexec-client[1525770]: process_io.c:36:handle_vchan_error: Error while vchan read, exiting
06:04:40 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12

06:04:40 HOST qrexec-policy-daemon[1524778]: error calling qrexec-policy-agent in dom0
06:04:40 HOST qrexec-policy-daemon[1524778]: Traceback (most recent call last):
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib/python3.8/site-packages/qrexec/tools/qrexec_policy_exec.py", line 133, in execute
06:04:40 HOST qrexec-policy-daemon[1524778]:     await super().execute(caller_ident)
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib/python3.8/site-packages/qrexec/policy/parser.py", line 528, in execute
06:04:40 HOST qrexec-policy-daemon[1524778]:     raise ExecutionFailed('qrexec-client failed: {}'.format(command))
06:04:40 HOST qrexec-policy-daemon[1524778]: qrexec.exc.ExecutionFailed: qrexec-client failed: ['/usr/lib/qubes/qrexec-client', '-d', '@adminvm', '-c', 'SOCKET10,fc32,45', '-E', 'QUBESRPC qubes.SyncAppMenus+ fc32 keyword adminvm']
06:04:40 HOST qrexec-policy-daemon[1524778]: During handling of the above exception, another exception occurred:
06:04:40 HOST qrexec-policy-daemon[1524778]: Traceback (most recent call last):
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib/python3.8/site-packages/qrexec/tools/qrexec_policy_exec.py", line 151, in notify
06:04:40 HOST qrexec-policy-daemon[1524778]:     await call_socket_service(guivm, service, source_domain, params)
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib/python3.8/site-packages/qrexec/server.py", line 105, in call_socket_service_local
06:04:40 HOST qrexec-policy-daemon[1524778]:     reader, writer = await asyncio.open_unix_connection(path)
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib64/python3.8/asyncio/streams.py", line 111, in open_unix_connection
06:04:40 HOST qrexec-policy-daemon[1524778]:     transport, _ = await loop.create_unix_connection(
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib64/python3.8/asyncio/unix_events.py", line 244, in create_unix_connection
06:04:40 HOST qrexec-policy-daemon[1524778]:     await self.sock_connect(sock, path)
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib64/python3.8/asyncio/selector_events.py", line 494, in sock_connect
06:04:40 HOST qrexec-policy-daemon[1524778]:     return await fut
06:04:40 HOST qrexec-policy-daemon[1524778]:   File "/usr/lib64/python3.8/asyncio/selector_events.py", line 499, in _sock_connect
06:04:40 HOST qrexec-policy-daemon[1524778]:     sock.connect(address)
06:04:40 HOST qrexec-policy-daemon[1524778]: FileNotFoundError: [Errno 2] No such file or directory
06:04:40 HOST qrexec-policy-daemon[1524778]: qrexec: qubes.SyncAppMenus: fc32 -> dom0: error while executing: qrexec-client failed: ['/usr/lib/qubes/qrexec-client', '-d', '@adminvm', '-c', 'SOCKET10,fc32,45', '-E', 'QUBESRPC qubes.SyncAppMenus+ fc32 keyword adminvm']

# These are started right after the failure, which I am guessing is what actually causes the crash
06:04:40 HOST qubes.SyncAppMenus+-fc32[1525797]: fc32: Updating gnome-system-monitor-kde
06:04:40 HOST qrexec-policy-daemon[1525795]: fc32: Updating gnome-system-monitor-kde
06:04:40 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
06:04:40 HOST kvmchand[1524749]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 45, port 514, read_min: 65536, write_min: 65536
06:04:40 HOST kvmchand[1524751]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 6
06:04:40 HOST kvmchand[1524753]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1525656
06:04:47 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 12

06:04:47 HOST qrexec-policy-daemon[1525795]: fc32: Updating org.gnome.clocks
06:04:47 HOST qubes.SyncAppMenus+-fc32[1525797]: fc32: Updating org.gnome.clocks
06:04:47 HOST kvmchand[1524758]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
06:04:47 HOST kvmchand[1524749]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 45, port 514, read_min: 65536, write_min: 65536
06:04:47 HOST kvmchand[1524751]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 7
06:04:47 HOST kvmchand[1524753]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 1525656

06:04:47 HOST audit[1525656]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=1525656 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1

@shawnanastasio
Owner

So with the patch from #23 applied, are you still seeing similar behavior where vchans are created and destroyed in short succession? Maybe I'll need to change kvmchand to be lazier about ivshmem device removal, so that devices don't get removed until one of the domains shuts down or N vchans have been closed, to work around this.
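
Something along these lines is what I have in mind; a minimal sketch of the idea only, with invented names (attach_ivshmem_device()/detach_ivshmem_device() stand in for whatever the daemon actually does):

#include <stddef.h>

/* Stand-ins for whatever kvmchand actually calls to hotplug/unplug the device. */
extern int  attach_ivshmem_device(int dom_id);
extern void detach_ivshmem_device(int dom_id, int ivshmem_index);

#define LAZY_REMOVE_THRESHOLD 8   /* assumed tunable */

struct idle_ivshmem {
    int dom_id;         /* domain the device is still attached to */
    int ivshmem_index;  /* the index seen in the attach log lines */
};

static struct idle_ivshmem idle_pool[64];
static size_t idle_count;

/* Called when a vchan closes: keep the device attached, just remember it. */
void ivshmem_release(int dom_id, int ivshmem_index)
{
    idle_pool[idle_count++] = (struct idle_ivshmem){ dom_id, ivshmem_index };

    /* Only detach once enough devices have piled up (or, elsewhere, when the
     * domain itself shuts down), so back-to-back vchans never race a detach. */
    if (idle_count >= LAZY_REMOVE_THRESHOLD) {
        for (size_t i = 0; i < idle_count; i++)
            detach_ivshmem_device(idle_pool[i].dom_id, idle_pool[i].ivshmem_index);
        idle_count = 0;
    }
}

/* Called when a new vchan needs a device: reuse an idle one if possible. */
int ivshmem_acquire(int dom_id)
{
    for (size_t i = 0; i < idle_count; i++) {
        if (idle_pool[i].dom_id == dom_id) {
            int idx = idle_pool[i].ivshmem_index;
            idle_pool[i] = idle_pool[--idle_count];   /* swap-remove */
            return idx;                               /* no hotplug needed */
        }
    }
    return attach_ivshmem_device(dom_id);             /* fresh hotplug */
}
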

@nrgaway
Contributor Author

nrgaway commented Sep 24, 2020

With #23 applied the behaviour is similar. The difference is that SyncAppMenus no longer fails when receiving the list of application .desktop files. Then, when connecting to the guest to get the icons, the first icon is received and the crash is triggered when attempting to get the second icon.

EDIT: Replaced the logs so they contain the QEMU stack trace, and added the kvmchand backtrace (I forgot to set prlimit for qemu-system-x86_64).

The following is mostly a full host log relating to the issue:

22:38:06 HOST qrexec-policy-daemon[73188]: qrexec: qubes.SyncAppMenus+: fc32 -> @adminvm: allowed to @adminvm
22:38:06 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:06.603 qrexec-client[76114]: qrexec-client.c:611:main: client_init: Connecting to 7, port=513, timeout=60
22:38:06 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 12
22:38:06 HOST kvmchand[73159]: [WARN] daemon/connections.c:287: Tried to connect to non-existent vchan at dom 7 port 513
22:38:06 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 7, client_dom: 0, port 513, read_min: 65536, write_min: 65536
22:38:06 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 6
22:38:06 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 75117
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:630:main: data_vchan connected
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=8
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=8, size=8
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=4
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=4, size=4
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:632:main: data_protocol_version: 3
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:635:main: prepare_ret: 0
22:38:11 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:11.607 qrexec-client[76114]: qrexec-client.c:638:main: select_loop...
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65536, size=65536
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65491
22:38:12 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:12.107 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65491, size=65491
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=65536
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=65536, size=65536
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20525
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.618 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20525, size=20525
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.624 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.624 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.624 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.625 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.626 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.626 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=16384, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.627 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.627 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=8192
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.627 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=8192, size=8192
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.628 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.629 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=12288, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.632 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.634 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.634 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=36864
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.634 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=36864, size=36864
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.635 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.635 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.635 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=16384, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.636 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.636 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=4096
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.636 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=4096, size=4096
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.637 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=24576, size=24576
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=32768, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.639 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.640 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.642 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.642 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.642 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=12288, size=12288
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.643 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.644 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=16384, size=16384
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=20480, size=20480
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.645 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=32768, size=32768
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.646 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=28672, size=28672
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=34418
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: txrx-vchan.c:79:read_vchan_all: read_vchan_all: while_loop: pos=0, ret=34418, size=34418
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: remote.c:65:handle_remote_data: handle_remote_data: while loop
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: txrx-vchan.c:76:read_vchan_all: read_vchan_all: pos=0, size=0
22:38:21 HOST qrexec-policy-daemon[76114]: 2020-09-23 22:38:21.647 qrexec-client[76114]: remote.c:92:handle_remote_data: handle_remote_data: hdr.len==0, REMOTE_EOF
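The trace above is just the vchan drain loop working normally: read_vchan_all() keeps pulling complete buffers (ret == size on every call) and handle_remote_data() loops until it reads a header with len == 0, at which point it reports REMOTE_EOF. As a minimal sketch of that pattern (not the actual qrexec txrx-vchan.c code; the read_fn callback and all names below are hypothetical), a loop like this reads exactly size bytes from a stream that may return short reads:

#include <stddef.h>
#include <sys/types.h>

/* Hypothetical: any blocking read that returns at most 'size' bytes,
 * 0 on EOF, or -1 on error. */
typedef ssize_t (*vchan_read_fn)(void *ctx, void *buf, size_t size);

/* Keep reading until the whole buffer is filled, as read_vchan_all does
 * in the log (there, ret == size so each call finishes in one pass). */
static int read_vchan_all_sketch(void *ctx, vchan_read_fn read_fn,
                                 void *buf, size_t size)
{
    size_t pos = 0;

    while (pos < size) {
        ssize_t ret = read_fn(ctx, (char *)buf + pos, size - pos);
        if (ret <= 0)
            return 0;           /* error or EOF before the buffer was full */
        pos += (size_t)ret;
    }
    return 1;                   /* exactly 'size' bytes read */
}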

# SyncAppMenus - Get icon: First icon transferred, second triggers coredump?
22:38:21 HOST qubes.SyncAppMenus+-fc32[76150]: Warning: ignoring key 'Comment' of org.keepassxc.KeePassXC
22:38:21 HOST qrexec-policy-daemon[76148]: Warning: ignoring key 'Comment' of org.keepassxc.KeePassXC
22:38:21 HOST qrexec-policy-daemon[76148]: fc32: Updating gnome-system-monitor-kde
22:38:21 HOST qrexec-policy-daemon[76148]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.SystemMonitor
22:38:21 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Updating gnome-system-monitor-kde
22:38:21 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.SystemMonitor
22:38:21 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 13
22:38:21 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 7, port 514, read_min: 65536, write_min: 65536
22:38:21 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 7
22:38:21 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 75117
22:38:27 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 13
22:38:27 HOST qrexec-policy-daemon[76148]: fc32: Got icon
22:38:27 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Got icon

# Second icon
22:38:27 HOST qrexec-policy-daemon[76148]: fc32: Updating org.gnome.clocks
22:38:27 HOST qrexec-policy-daemon[76148]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.clocks
22:38:27 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Updating org.gnome.clocks
22:38:27 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=org.gnome.clocks
22:38:27 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 13
22:38:27 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 7, port 514, read_min: 65536, write_min: 65536
22:38:27 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 8
22:38:27 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 75117

# Coredump start
22:38:27 HOST audit[75117]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=75117 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=6 res=1
22:38:28 HOST systemd[1]: Started Process Core Dump (PID 76207/UID 0).
22:38:28 HOST audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@2-76207-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'

22:38:28 HOST kvmchand[73165]: [ERROR] daemon/ivshmem.c:905: Received unknown message from client on fd 41
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 35
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 29
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 41
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 23
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:910: Client disconnected! fd: 10
22:38:28 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:501: Stopping listener thread.
22:38:28 HOST libvirtd[2728]: Unable to read from monitor: Connection reset by peer
22:38:28 HOST kvmchand[73161]: libvirt: QEMU Driver error : Unable to read from monitor: Connection reset by peer
22:38:28 HOST kvmchand[73161]: [71B blob data]
22:38:28 HOST kvmchand[73161]: [ERROR] daemon/libvirt.c:710: Failed to attach ivshmem to id 7!
22:38:28 HOST kvmchand[73159]: [ERROR] daemon/connections.c:238: Failed to attach ivshmem device to client domain 7.
22:38:28 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:628: Client disconnected! fd: 13

22:38:28 HOST qrexec-policy-daemon[76148]: fc32: Failed to get icon for org.gnome.clocks: No icon received
22:38:28 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Failed to get icon for org.gnome.clocks: No icon received
22:38:28 HOST qrexec-policy-daemon[76148]: fc32: Updating exo-mail-reader
22:38:28 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Updating exo-mail-reader
22:38:28 HOST qubes.SyncAppMenus+-fc32[76150]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=emblem-mail
22:38:28 HOST qrexec-policy-daemon[76148]: fc32: Get icon: vm=<qubesadmin.vm.QubesVM object at 0x7f1ad5f55730>, icon=emblem-mail

# --- QEMU COREDUMP ------------------------------------------------------------
22:38:28 HOST systemd-machined[2431]: Machine qemu-7-fc32 terminated.
22:38:28 HOST systemd-coredump[76208]: Process 75117 (qemu-system-x86) of user 107 dumped core.

                  Stack trace of thread 75117:
                  #0  0x00007f9ef00839e5 __GI_raise (libc.so.6 + 0x3c9e5)
                  #1  0x00007f9ef006c895 __GI_abort (libc.so.6 + 0x25895)
                  #2  0x00005585f614d474 qemu_ram_set_idstr (qemu-system-x86_64 + 0x2f1474)
                  #3  0x00005585f6437fa5 vmstate_register_ram (qemu-system-x86_64 + 0x5dbfa5)
                  #4  0x00005585f639281a ivshmem_common_realize (qemu-system-x86_64 + 0x53681a)
                  #5  0x00005585f63c2573 pci_qdev_realize (qemu-system-x86_64 + 0x566573)
                  #6  0x00005585f6332b55 device_set_realized (qemu-system-x86_64 + 0x4d6b55)
                  #7  0x00005585f64a257b property_set_bool (qemu-system-x86_64 + 0x64657b)
                  #8  0x00005585f64a78b4 object_property_set_qobject (qemu-system-x86_64 + 0x64b8b4)
                  #9  0x00005585f64a52ba object_property_set_bool (qemu-system-x86_64 + 0x6492ba)
                  #10 0x00005585f62c7f48 qdev_device_add (qemu-system-x86_64 + 0x46bf48)
                  #11 0x00005585f62c82ab qmp_device_add (qemu-system-x86_64 + 0x46c2ab)
                  #12 0x00005585f654b2c0 do_qmp_dispatch (qemu-system-x86_64 + 0x6ef2c0)
                  #13 0x00005585f64496ed monitor_qmp_dispatch (qemu-system-x86_64 + 0x5ed6ed)
                  #14 0x00005585f6449dac monitor_qmp_bh_dispatcher (qemu-system-x86_64 + 0x5eddac)
                  #15 0x00005585f6594dac aio_bh_call (qemu-system-x86_64 + 0x738dac)
                  #16 0x00005585f6598494 aio_dispatch (qemu-system-x86_64 + 0x73c494)
                  #17 0x00005585f6594a32 aio_ctx_dispatch (qemu-system-x86_64 + 0x738a32)
                  #18 0x00007f9ef141d78f g_main_dispatch (libglib-2.0.so.0 + 0x5278f)
                  #19 0x00005585f65976b8 glib_pollfds_poll (qemu-system-x86_64 + 0x73b6b8)
                  #20 0x00005585f62cc2af main_loop (qemu-system-x86_64 + 0x4702af)
                  #21 0x00005585f615379c main (qemu-system-x86_64 + 0x2f779c)
                  #22 0x00007f9ef006e042 __libc_start_main (libc.so.6 + 0x27042)
                  #23 0x00005585f6156afe _start (qemu-system-x86_64 + 0x2faafe)

                  Stack trace of thread 75131:
                  #0  0x00007f9ef0224750 __lll_lock_wait (libpthread.so.0 + 0x13750)
                  #1  0x00007f9ef021ce53 __GI___pthread_mutex_lock (libpthread.so.0 + 0xbe53)
                  #2  0x00005585f659a76d qemu_mutex_lock_impl (qemu-system-x86_64 + 0x73e76d)
                  #3  0x00005585f61a280e qemu_mutex_lock_iothread_impl (qemu-system-x86_64 + 0x34680e)
                  #4  0x00005585f65ad8e6 call_rcu_thread (qemu-system-x86_64 + 0x7518e6)
                  #5  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #6  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #7  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 75140:
                  #0  0x00007f9ef013daaf __GI___poll (libc.so.6 + 0xf6aaf)
                  #1  0x00007f9ef141daae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
                  #2  0x00007f9ef141de33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                  #3  0x00007f9ef0cd62db red_worker_main (libspice-server.so.1 + 0x542db)
                  #4  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #5  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 75137:
                  #0  0x00007f9ef013f3bb ioctl (libc.so.6 + 0xf83bb)
                  #1  0x00005585f61be519 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x362519)
                  #2  0x00005585f61be5d9 kvm_cpu_exec (qemu-system-x86_64 + 0x3625d9)
                  #3  0x00005585f61a2aac qemu_kvm_cpu_thread_fn (qemu-system-x86_64 + 0x346aac)
                  #4  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #5  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #6  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 75135:
                  #0  0x00007f9ef013daaf __GI___poll (libc.so.6 + 0xf6aaf)
                  #1  0x00007f9ef141daae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
                  #2  0x00007f9ef141de33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                  #3  0x00005585f62c5fb1 iothread_run (qemu-system-x86_64 + 0x469fb1)
                  #4  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #5  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #6  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 76112:
                  #0  0x00007f9ef0223c58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                  #1  0x00007f9ef0223d83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                  #2  0x00005585f659ad6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                  #3  0x00005585f6595864 worker_thread (qemu-system-x86_64 + 0x739864)
                  #4  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #5  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #6  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 75136:
                  #0  0x00007f9ef013f3bb ioctl (libc.so.6 + 0xf83bb)
                  #1  0x00005585f61be519 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x362519)
                  #2  0x00005585f61be5d9 kvm_cpu_exec (qemu-system-x86_64 + 0x3625d9)
                  #3  0x00005585f61a2aac qemu_kvm_cpu_thread_fn (qemu-system-x86_64 + 0x346aac)
                  #4  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #5  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #6  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 76118:
                  #0  0x00007f9ef0223c58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                  #1  0x00007f9ef0223d83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                  #2  0x00005585f659ad6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                  #3  0x00005585f6595864 worker_thread (qemu-system-x86_64 + 0x739864)
                  #4  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #5  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #6  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

                  Stack trace of thread 76199:
                  #0  0x00007f9ef0223c58 futex_abstimed_wait_cancelable (libpthread.so.0 + 0x12c58)
                  #1  0x00007f9ef0223d83 __new_sem_wait_slow (libpthread.so.0 + 0x12d83)
                  #2  0x00005585f659ad6f qemu_sem_timedwait (qemu-system-x86_64 + 0x73ed6f)
                  #3  0x00005585f6595864 worker_thread (qemu-system-x86_64 + 0x739864)
                  #4  0x00005585f659a683 qemu_thread_start (qemu-system-x86_64 + 0x73e683)
                  #5  0x00007f9ef021a432 start_thread (libpthread.so.0 + 0x9432)
                  #6  0x00007f9ef0148913 __clone (libc.so.6 + 0x101913)

22:38:28 HOST audit[2728]: VIRT_CONTROL pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=stop reason=failed vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:28 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(7, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 5, detail: 5)!
22:38:28 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:603: Removing domain 12345678-c061-4f3c-877a-8f429d0ac438
22:38:28 HOST kvmchand[73159]: [INFO] daemon/connections.c:381: Removed vchan between dom 7 (server) and dom 0, (client)
22:38:28 HOST systemd[1]: systemd-coredump@2-76207-0.service: Succeeded.

# Qubes - Restore guest images
22:38:28 HOST qubesd[73181]: Removed file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img
22:38:28 HOST qubesd[73181]: Reflinked file: /var/lib/qubes/vm-templates/fc32/root.img -> /var/lib/qubes/vm-templates/fc32/root.img.55@2020-09-24T01:32:42Z~xi94c9sd
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/private-dirty.img -> /var/lib/qubes/vm-templates/fc32/private.img
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/root.img.55@2020-09-24T01:32:42Z~xi94c9sd -> /var/lib/qubes/vm-templates/fc32/root.img.55@2020-09-24T01:32:42Z
22:38:28 HOST qubesd[73181]: Removed file: /var/lib/qubes/vm-templates/fc32/root.img.54@2020-09-24T01:32:26Z
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/root-dirty.img -> /var/lib/qubes/vm-templates/fc32/root.img

# --- GUEST RESTARTING ---------------------------------------------------------
22:38:28 HOST qubesd[73181]: Starting fc32
22:38:28 HOST qubesd[73181]: Created sparse file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img~9cgp4bs8
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/volatile-dirty.img~9cgp4bs8 -> /var/lib/qubes/vm-templates/fc32/volatile-dirty.img
22:38:28 HOST qubesd[73181]: Reflinked file: /var/lib/qubes/vm-templates/fc32/private.img -> /var/lib/qubes/vm-templates/fc32/private-dirty.img~z690m1sb
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/private-dirty.img~z690m1sb -> /var/lib/qubes/vm-templates/fc32/private-dirty.img
22:38:28 HOST qubesd[73181]: Reflinked file: /var/lib/qubes/vm-templates/fc32/root.img -> /var/lib/qubes/vm-templates/fc32/root-dirty.img~s8_46lbr
22:38:28 HOST qubesd[73181]: Renamed file: /var/lib/qubes/vm-templates/fc32/root-dirty.img~s8_46lbr -> /var/lib/qubes/vm-templates/fc32/root-dirty.img
22:38:28 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(-1, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 0, detail: 1)!
22:38:28 HOST kvmchand[73161]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...

22:38:29 HOST systemd-machined[2431]: New machine qemu-8-fc32.
22:38:29 HOST systemd[1]: Started Virtual Machine qemu-8-fc32.
22:38:29 HOST audit: BPF prog-id=65 op=LOAD
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=deny vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=all exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=major category=pty maj=88 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/null" rdev=01:03 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/full" rdev=01:07 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/zero" rdev=01:05 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/random" rdev=01:08 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/urandom" rdev=01:09 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/ptmx" rdev=05:02 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/kvm" rdev=0A:E8 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dfc32.scope/" class=path path="/dev/urandom" rdev=01:09 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-templates/fc32/root-dirty.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-templates/fc32/private-dirty.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-templates/fc32/volatile-dirty.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-disk="?" new-disk="/var/lib/qubes/vm-kernels/5.6.16-1/modules.img" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=fs reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-fs="?" new-fs="/home/kvm/share" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=fs reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-fs="?" new-fs="/home/kvm/qubes/qubes-packages-mirror-repo/vm-fc32/rpm" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=net reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-net="?" new-net="52:54:00:10:34:4b" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=chardev reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-chardev="?" new-chardev="/dev/pts/29" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=chardev reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-chardev="?" new-chardev="/var/lib/libvirt/qemu/channel/target/domain-8-fc32/org.qemu.guest_agent.0" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=rng reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-rng="?" new-rng="/dev/urandom" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=mem reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-mem=0 new-mem=4096000 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_RESOURCE pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=vcpu reason=start vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 old-vcpu=0 new-vcpu=2 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST audit[2728]: VIRT_CONTROL pid=2728 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=start reason=booted vm="fc32" uuid=12345678-c061-4f3c-877a-8f429d0ac438 vm-pid=76323 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
22:38:29 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(8, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 2, detail: 0)!
22:38:29 HOST qubesd[73181]: Setting Qubes DB info for the VM
22:38:29 HOST qubesd[73181]: Starting Qubes DB
22:38:29 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:262: About to attach ivshmem device at /tmp/kvmchand/ivshmem_socket, index 0
22:38:29 HOST libvirtd[2728]: Domain id=8 name='fc32' uuid=12345678-c061-4f3c-877a-8f429d0ac438 is tainted: custom-monitor
22:38:29 HOST kvmchand[73165]: [INFO] daemon/ivshmem.c:534: Got connection from PID: 76323
22:38:29 HOST kvmchand[73161]: [INFO] daemon/libvirt.c:560: Domain fc32(8, UUID: 12345678-c061-4f3c-877a-8f429d0ac438) changed state (event: 3, detail: 0)!
22:38:29 HOST kvmchand[73161]: [WARN] daemon/libvirt.c:634: Unknown lifecycle event 10! Ignoring...
22:38:29 HOST runuser[76341]: pam_unix(runuser:session): session opened for user user by (uid=0)
22:38:29 HOST kvmchand[73169]: [INFO] daemon/localhandler.c:620: Client connected! fd: 13
22:38:29 HOST kvmchand[73159]: [INFO] daemon/connections.c:118: vchan_init called! server_dom: 0, client_dom: 8, port 111, read_min: 4096, write_min: 4096

22:38:29 HOST audit[73161]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=73161 comm="kvmchand" exe="/usr/sbin/kvmchand" sig=6 res=1
22:38:29 HOST kvmchand[73161]: malloc_consolidate(): invalid chunk size

# --- KVMCHAND COREDUMP (after VM auto-restart) --------------------------------
22:38:29 HOST systemd[1]: Started Process Core Dump (PID 76346/UID 0).
22:38:29 HOST kvmchand[73159]: [ERROR] daemon/daemon.c:164: ERROR: Child unexpectedly died!
22:38:29 HOST libvirtd[2728]: End of file while reading data: Input/output error
22:38:29 HOST systemd[1]: kvmchand-host.service: Main process exited, code=exited, status=1/FAILURE
22:38:29 HOST systemd[1]: kvmchand-host.service: Failed with result 'exit-code'.
22:38:29 HOST systemd-coredump[76347]: Process 73161 (kvmchand) of user 0 dumped core.

                  Stack trace of thread 73164:
                  #0  0x00007fa53a1199e5 __GI_raise (libc.so.6 + 0x3c9e5)
                  #1  0x00007fa53a102895 __GI_abort (libc.so.6 + 0x25895)
                  #2  0x00007fa53a15d857 __libc_message (libc.so.6 + 0x80857)
                  #3  0x00007fa53a164d7c malloc_printerr (libc.so.6 + 0x87d7c)
                  #4  0x00007fa53a165c48 malloc_consolidate (libc.so.6 + 0x88c48)
                  #5  0x00007fa53a167a85 _int_malloc (libc.so.6 + 0x8aa85)
                  #6  0x00007fa53a1695f9 __GI___libc_malloc (libc.so.6 + 0x8c5f9)
                  #7  0x00007fa539b20a10 g_realloc (libglib-2.0.so.0 + 0x58a10)
                  #8  0x00007fa53a59f490 virReallocN (libvirt.so.0 + 0xac490)
                  #9  0x00007fa53a6fd392 virNetMessageEncodeHeader (libvirt.so.0 + 0x20a392)
                  #10 0x00007fa53a714fc9 virNetClientProgramCall (libvirt.so.0 + 0x221fc9)
                  #11 0x00007fa53a738e90 callFull (libvirt.so.0 + 0x245e90)
                  #12 0x00007fa53a73a19e remoteDomainLookupByUUID (libvirt.so.0 + 0x24719e)
                  #13 0x00007fa53a76856e virDomainLookupByUUID (libvirt.so.0 + 0x27556e)
                  #14 0x00007fa53a768777 virDomainLookupByUUIDString (libvirt.so.0 + 0x275777)
                  #15 0x0000000000403dd1 get_pid_by_domain_id (kvmchand + 0x3dd1)
                  #16 0x0000000000408aa2 response_dispatcher_thread (kvmchand + 0x8aa2)
                  #17 0x00007fa53a2b2432 start_thread (libpthread.so.0 + 0x9432)
                  #18 0x00007fa53a1de913 __clone (libc.so.6 + 0x101913)
    
                  Stack trace of thread 73161:
                  #0  0x00007fa53a1d3aaf __GI___poll (libc.so.6 + 0xf6aaf)
                  #1  0x00007fa539b1aaae g_main_context_poll (libglib-2.0.so.0 + 0x52aae)
                  #2  0x00007fa539b1abe3 g_main_context_iteration (libglib-2.0.so.0 + 0x52be3)
                  #3  0x00007fa53a5c72b4 virEventGLibRunOnce (libvirt.so.0 + 0xd42b4)
                  #4  0x00000000004043e4 run_libvirt_loop (kvmchand + 0x43e4)
                  #5  0x0000000000402a6c host_main (kvmchand + 0x2a6c)
                  #6  0x00007fa53a104042 __libc_start_main (libc.so.6 + 0x27042)
                  #7  0x0000000000402c9e _start (kvmchand + 0x2c9e)
    
                  Stack trace of thread 73163:
                  #0  0x00007fa53a2bd6dd __libc_recvmsg (libpthread.so.0 + 0x146dd)
                  #1  0x0000000000408b3e ipcmsg_recv (kvmchand + 0x8b3e)
                  #2  0x0000000000408ccf client_receiver_thread (kvmchand + 0x8ccf)
                  #3  0x00007fa53a2b2432 start_thread (libpthread.so.0 + 0x9432)
                  #4  0x00007fa53a1de913 __clone (libc.so.6 + 0x101913)
    
                  Stack trace of thread 73162:
                  #0  0x00007fa53a2b8e92 futex_wait_cancelable (libpthread.so.0 + 0xfe92)
                  #1  0x00000000004095c6 dispatcher_thread (kvmchand + 0x95c6)
                  #2  0x00007fa53a2b2432 start_thread (libpthread.so.0 + 0x9432)
                  #3  0x00007fa53a1de913 __clone (libc.so.6 + 0x101913)

22:38:29 HOST systemd[1]: systemd-coredump@3-76346-0.service: Succeeded.
22:38:29 HOST audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-76346-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST abrt-notification[76372]: Process 75117 (qemu-system-x86_64) crashed in qemu_ram_set_idstr.cold()

# systemd auto restart
22:38:30 HOST systemd[1]: kvmchand-host.service: Scheduled restart job, restart counter is at 1.
22:38:30 HOST audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kvmchand-host comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kvmchand-host comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST systemd[1]: Stopped KVM vchan daemon.
22:38:30 HOST systemd[1]: Started KVM vchan daemon.
22:38:30 HOST audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kvmchand-host comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
22:38:30 HOST kvmchand[76383]: [INFO] daemon/localhandler.c:580: guest_main: sd_notify

Backtrace for kvmchand after the guest VM was auto-started by Qubes. Not sure if it's relevant, given the QEMU coredump above.

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007fa53a102895 in __GI_abort () at abort.c:79
#2  0x00007fa53a15d857 in __libc_message (action=action@entry=do_abort, 
    fmt=fmt@entry=0x7fa53a26ee1d "%s\n") at ../sysdeps/posix/libc_fatal.c:155
#3  0x00007fa53a164d7c in malloc_printerr (
    str=str@entry=0x7fa53a2710a0 "malloc_consolidate(): invalid chunk size") at malloc.c:5347
#4  0x00007fa53a165c48 in malloc_consolidate (av=av@entry=0x7fa528000020) at malloc.c:4477
#5  0x00007fa53a167a85 in _int_malloc (av=av@entry=0x7fa528000020, bytes=bytes@entry=65540)
    at malloc.c:3699
#6  0x00007fa53a1695f9 in __GI___libc_malloc (bytes=65540) at malloc.c:3066
#7  0x00007fa539b20a10 in g_realloc (mem=0x0, n_bytes=n_bytes@entry=65540) at ../glib/gmem.c:167
#8  0x00007fa539b20bb0 in g_realloc_n (mem=<optimized out>, n_blocks=n_blocks@entry=1, 
    n_block_bytes=n_block_bytes@entry=65540) at ../glib/gmem.c:390
#9  0x00007fa53a59f490 in virReallocN (ptrptr=ptrptr@entry=0x7fa5280027e8, size=size@entry=1, 
    count=count@entry=65540) at ../../src/util/viralloc.c:91
#10 0x00007fa53a6fd392 in virNetMessageEncodeHeader (msg=msg@entry=0x7fa5280027e0)
    at ../../src/rpc/virnetmessage.c:233
#11 0x00007fa53a714fc9 in virNetClientProgramCall (prog=prog@entry=0x238c2b0, 
    client=client@entry=0x238b820, serial=serial@entry=59, proc=proc@entry=24, 
    noutfds=noutfds@entry=0, outfds=outfds@entry=0x0, ninfds=0x0, infds=0x0, 
    args_filter=0x7fa53a7588c0 <xdr_remote_domain_lookup_by_uuid_args>, args=0x7fa5378a8810, 
    ret_filter=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>, ret=0x7fa5378a8820)
    at ../../src/rpc/virnetclientprogram.c:314
#12 0x00007fa53a738e90 in callFull (conn=<optimized out>, ret=0x7fa5378a8820 "", 
    ret_filter=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>, 
    args=0x7fa5378a8810 "\022\064Vx\300aO<\207z\217B\235\n\304\070", 
    args_filter=0x7fa53a7588c0 <xdr_remote_domain_lookup_by_uuid_args>, proc_nr=24, fdoutlen=0x0, 
    fdout=0x0, fdinlen=0, fdin=0x0, flags=flags@entry=0, priv=0x238fc00, 
    priv@entry=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>)
    at ../../src/remote/remote_driver.c:6754
#13 call (priv=priv@entry=0x238fc00, flags=flags@entry=0, proc_nr=proc_nr@entry=24, 
    args_filter=0x7fa53a7588c0 <xdr_remote_domain_lookup_by_uuid_args>, 
    args=args@entry=0x7fa5378a8810 "\022\064Vx\300aO<\207z\217B\235\n\304\070", 
    ret_filter=ret_filter@entry=0x7fa53a7588e0 <xdr_remote_domain_lookup_by_uuid_ret>, 
    ret=0x7fa5378a8820 "", conn=0x238ec00) at ../../src/remote/remote_driver.c:6776
#14 0x00007fa53a73a19e in remoteDomainLookupByUUID (conn=0x238ec00, 
    uuid=0x7fa5378a8900 "\022\064Vx\300aO<\207z\217B\235\n\304\070")
    at ./remote/remote_client_bodies.h:3610
#15 0x00007fa53a76856e in virDomainLookupByUUID (conn=0x238ec00, 
    uuid=0x7fa5378a8900 "\022\064Vx\300aO<\207z\217B\235\n\304\070") at ../../src/libvirt-domain.c:346
#16 0x00007fa53a768777 in virDomainLookupByUUIDString (conn=0x238ec00, uuidstr=0x2390b50 "12345678-c061-4f3c-877a-8f429d0ac438") at ../../src/libvirt-domain.c:389
#17 0x0000000000403dd1 in get_pid_by_domain_id (pid_out=<synthetic pointer>, id=<optimized out>) at daemon/libvirt.c:121
#18 handle_ipc_message (msg=0x7fa530000da0) at daemon/libvirt.c:681
#19 0x0000000000408aa2 in response_dispatcher_thread (rdata_=0x414630 <g_ipc_data+144>) at daemon/ipc.c:392
#20 0x00007fa53a2b2432 in start_thread (arg=<optimized out>) at pthread_create.c:477
#21 0x00007fa53a1de913 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

@shawnanastasio
Owner

Sorry for the delay. Have you managed to figure anything out or work around this bug? If not, I have an idea I could try to implement that would likely work around it.

@shawnanastasio
Owner

I've pushed 09cd616, which modifies the daemon's vchan allocation behavior to be much less conservative about disconnecting vchans (it now tries to reuse them instead). Eventually this may end up wasting memory and will have to be reined in, but it should result in significantly fewer hotplugs, which may prevent this issue with QEMU from being hit.

Let me know if you get the chance to test it.
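To illustrate the reuse idea (a rough sketch under my own assumptions, not the actual 09cd616 change; the struct and field names are made up): instead of detaching the ivshmem device when a vchan closes and hotplugging a new one for the next connection, the daemon could keep idle slots around and hand back one whose shared-memory region is already large enough, so QEMU never has to realize a new ivshmem device and register a new RAMBlock (the path that aborts in qemu_ram_set_idstr).

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical bookkeeping entry; not the real kvmchand data structures. */
struct vchan_slot {
    uint32_t server_dom, client_dom, port;
    size_t read_min, write_min;
    bool in_use;        /* false => ivshmem device still attached, vchan idle */
};

#define MAX_SLOTS 64
static struct vchan_slot slots[MAX_SLOTS];

/* Return an idle slot between the same domains whose shared-memory region
 * is already big enough, so no new ivshmem hotplug is needed. */
static struct vchan_slot *find_reusable(uint32_t server_dom, uint32_t client_dom,
                                        size_t read_min, size_t write_min)
{
    for (size_t i = 0; i < MAX_SLOTS; i++) {
        struct vchan_slot *s = &slots[i];
        if (!s->in_use &&
            s->server_dom == server_dom && s->client_dom == client_dom &&
            s->read_min >= read_min && s->write_min >= write_min)
            return s;
    }
    return NULL;        /* nothing to reuse: fall back to a fresh hotplug */
}

Under that scheme a fresh hotplug only happens when no idle slot is large enough, which should make the qemu_ram_set_idstr abort much harder to hit.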

@nrgaway
Contributor Author

nrgaway commented Nov 26, 2020

Thanks. I like the idea of reusing the vchans, since it takes so long to set them up. Looking at the log, it seems to take over 10 seconds, and that's on a fast CPU with lots of memory.

I have had to focus my efforts on catching up on a few things, but I will have a chance to resume development by the weekend or shortly afterwards. I have also set aside most of December to hopefully finish the remaining tasks needed to get Qubes completely functional within KVM. I currently don't have the Qubes/KVM environment loaded on my development machine, but will test the patch soon.

@shawnanastasio
Owner

Glad to hear - let me know if you run into any issues or have any breakthroughs 😄
