Stable PAGE_HASH_ERRORS_INPAGE BSODs with vioscsi on Windows machine #1014

Open
AlexMKX opened this issue Dec 17, 2023 · 6 comments

AlexMKX commented Dec 17, 2023

Describe the bug
There is a Windows Server 2022 virtual machine with a physically connected 6 TB drive and a second 6 TB virtual drive backed by a ZFS dataset on an HDD.
While copying data from the physical drive to the virtual drive, a BSOD occurs after anywhere between 100 and 200 GB has been transferred.
It happens when the target drive is attached via VirtIO SCSI, VirtIO SCSI Single, or VirtIO Block. When the target drive is attached via SATA, the data copies perfectly.
In addition, Optimize-Volume -ReTrim for the SCSI-attached disk requires about 80 GB+ of available RAM, while trimming the same drive attached via SATA causes no problems.
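For reference, the retrim mentioned above is the stock PowerShell cmdlet; a minimal sketch, with an illustrative drive letter:

Optimize-Volume -DriveLetter E -ReTrim -Verbose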

To Reproduce
Steps to reproduce the behaviour:
Copy a relatively large amount of data from one disk to another.
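For example, a bulk copy of this kind (source and destination paths are purely illustrative):

robocopy D:\data E:\data /E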

Host:

proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-6
pve-kernel-5.13: 7.1-9
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2.16-18-pve: 6.2.16-18
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph: 17.2.7-pve1
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

VM:

  • Windows version: Windows Server 2022 21H2, Build 20348.2159
  • Driver versions exhibiting the problem: 0.1.215-0.1.240

Additional context

The usual minidump analysis:

BLACKBOXWINLOGON: 1

CUSTOMER_CRASH_COUNT:  1

PROCESS_NAME:  svchost.exe

PAGE_HASH_ERRORS_DETECTED: 1

STACK_TEXT:  
ffffef0f`78de57e8 fffff805`3f7a74e1     : 00000000`0000001a 00000000`0000003f 00000000`00006e81 00000000`00006e81 : nt!KeBugCheckEx
ffffef0f`78de57f0 fffff805`3f68afc1     : ffffab8f`5d5960c0 ffffffff`ffffffff ffffef0f`78de5a10 ffffef0f`78de5b40 : nt!MiValidatePagefilePageHash+0x241
ffffef0f`78de58d0 fffff805`3f4ba915     : ffffef0f`00000000 ffffef0f`78de5a00 ffffef0f`78de5a28 ffffcee7`00000000 : nt!MiWaitForInPageComplete+0x1d0091
ffffef0f`78de59d0 fffff805`3f4a9b6d     : 00000000`c0033333 00000000`00000001 00007ffb`73063228 00000000`00000000 : nt!MiIssueHardFault+0x1d5
ffffef0f`78de5a80 fffff805`3f630d41     : ffffab8f`5ec73080 ffffab8f`5e5b9080 000001b4`8b3d8730 ffffab8f`00000000 : nt!MmAccessFault+0x35d
ffffef0f`78de5c20 00007ffb`73050be7     : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiPageFault+0x341
00000082`c3dfece0 00000000`00000000     : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : 0x00007ffb`73050be7


SYMBOL_NAME:  PAGE_HASH_ERRORS_INPAGE

MODULE_NAME: Unknown_Module

IMAGE_NAME:  Unknown_Image

STACK_COMMAND:  .cxr; .ecxr ; kb

FAILURE_BUCKET_ID:  PAGE_HASH_ERRORS_0x1a_3f

OS_VERSION:  10.0.20348.859

BUILDLAB_STR:  fe_release_svc_prod2

OSPLATFORM_TYPE:  x64

OSNAME:  Windows 10

FAILURE_ID_HASH:  {6a2d4548-0eec-578d-e8f1-9e2239aa9a00}

Followup:     MachineOwner
---------

 *** Memory manager detected 1 instance(s) of corrupted pagefile page(s) while performing in-page operations.

What I tried to solve it:

  1. Switching between VirtIO SCSI Single and VirtIO SCSI
  2. Machine types i440fx and q35
  3. Juggling the Trim/IOThread/io_uring/native threads/SSD emulation/Discard/Caching options (one such combination is sketched after this list)
  4. Switching ballooning on and off
  5. Switching NUMA on and off
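For illustration, one such disk-option combination expressed as the equivalent Proxmox command (VM ID and volume name are assumed from the command line posted below; the exact combination is hypothetical):

qm set 137 --scsi1 infra:vm-137-disk-3,iothread=1,discard=on,ssd=1,cache=writeback,aio=io_uring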
vrozenfe (Collaborator) commented:

@AlexMKX
Thank you for reporting this issue.
Can you please post the QEMU command line and, if possible, share the crash dump file?

Thanks,
Vadim.

AlexMKX (Author) commented Dec 19, 2023

@vrozenfe thanks for taking a look at this.
I have sent the dump via email.
The command line is:

 /usr/bin/kvm -id 137 -name stor,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/137.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/137.pid -daemonize -smbios type=1,uuid=52534755-c968-48b9-b6f1-8012fce39718 
-drive if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd -drive if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/zvol/infra/vm-137-disk-0,size=540672 
-smp 4,sockets=2,cores=2,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg 
-vnc unix:/var/run/qemu-server/137.vnc,password=on -cpu host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt -m 82192 
-object memory-backend-ram,id=ram-node0,size=41096M -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 -object memory-backend-ram,id=ram-node1,size=41096M 
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1 -object iothread,id=iothread-virtioscsi0 
-object iothread,id=iothread-virtioscsi1 
-object iothread,id=iothread-virtioscsi3 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg 
-device vmgenid,guid=96f37124-bc3c-4142-a49d-7d280cacdc8b -device usb-tablet,id=tablet,bus=ehci.0,port=1 
-chardev socket,id=tpmchar,path=/var/run/qemu-server/137.swtpm -tpmdev emulator,id=tpmdev,chardev=tpmchar
 -device tpm-tis,tpmdev=tpmdev -device VGA,id=vga,bus=pcie.0,addr=0x1 
-chardev socket,path=/var/run/qemu-server/137.qga,server=on,wait=off,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on -iscsi initiator-name=iqn.1993-08.org.debian:01:56e748a7f695 -drive if=none,id=drive-ide0,media=cdrom,aio=io_uring 
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=102 
-device virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0 
-drive file=/dev/zvol/storage/vm-137-disk-0,if=none,id=drive-scsi0,discard=on,throttling.bps-write=10485760,throttling.bps-write-max=104857600,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap 
-device scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0 
-device virtio-scsi-pci,id=virtioscsi1,bus=pci.3,addr=0x2,iothread=iothread-virtioscsi1 
-drive file=/dev/zvol/infra/vm-137-disk-3,if=none,id=drive-scsi1,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap -device scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,rotation_rate=1,bootindex=100 
-device virtio-scsi-pci,id=virtioscsi3,bus=pci.3,addr=0x4,iothread=iothread-virtioscsi3 
-drive file=/dev/disk/by-id/wwn-0x50014ee2139969e3,if=none,id=drive-scsi3,format=raw,cache=none,aio=io_uring,detect-zeroes=on -device scsi-hd,bus=virtioscsi3.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3 
-netdev type=tap,id=net0,ifname=tap137i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown 
-device e1000,mac=32:1E:10:7E:AE:E4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=101 -rtc driftfix=slew,base=localtime -machine hpet=off,type=pc-q35-8.1+pve0 
-global kvm-pit.lost_tick_policy=discard

This is the current command line. Kindly note that during the BSOD there was no write bandwidth cap on scsi0.

I'm running a Supermicro X8DTU machine with registered ECC RAM and other VMs on it, so I believe this is not caused by host memory errors.
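As a quick sanity check of that assumption, the host's corrected/uncorrected ECC error counters can be read through the kernel EDAC interface (assuming the EDAC driver for the platform is loaded):

grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count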

AlexMKX (Author) commented Dec 21, 2023

Some updates on the issue.
During the issue-related operations, the virtual disk resided on an encrypted ZFS dataset with checksums enabled, compression disabled, and dedup turned on. This dataset resided on a spinning disk.
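For context, that dataset corresponds to properties roughly like these (pool and dataset names are illustrative, not the real ones):

zfs create -o encryption=on -o keyformat=passphrase -o checksum=on -o compression=off -o dedup=on tank/vmstore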

I've removed that dataset and passed the disk directly through to the VM, created a partition, and encrypted it with VeraCrypt. So far ~1.2 TB has been copied without any problems.
The drive is passed through as VirtIO SCSI Single.

AlexMKX (Author) commented Dec 23, 2023

Hello. Kindly find updates below.
Copying 2 TB to storage on ZFS without dedup went flawlessly.
Copying the same amount of data to storage on ZFS with dedup=skein,verify ended up with Device Not Ready (no BSOD yet, though).
There are "Reset to device, \Device\RaidPort1, was issued." entries in the event logs.
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\disk:TimeOutValue = 60
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\disk:IoTimeoutValue = 60
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vioscsi\Parameters:IoTimeoutValue = 90
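For reference, a value like the vioscsi timeout above is typically set with reg.exe, e.g.:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\vioscsi\Parameters" /v IoTimeoutValue /t REG_DWORD /d 90 /f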

I'm going to cap the throughput for the target device in QEMU, to spread out and lower the write load on ZFS.
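The cap uses the same throttling.* drive properties already visible in the command line above; an illustrative fragment for the target drive (byte values hypothetical, roughly 20 MBps sustained with a 100 MBps burst):

-drive file=/dev/zvol/storage/vm-137-disk-0,if=none,id=drive-scsi0,discard=on,throttling.bps-write=20971520,throttling.bps-write-max=104857600,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap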

AlexMKX (Author) commented Dec 24, 2023

A 20 MBps bandwidth cap for the volume with dedup ended up with CRITICAL_PROCESS_DIED for svchost after an 800 GB copy.
It seems the problem is most likely caused by underlying storage latency issues.

At a 15 MBps bandwidth cap I got the Device Not Ready error.

AlexMKX (Author) commented Dec 25, 2023

Hello. I have reduced the VM's RAM to 4096 MB, and now it fails faster. It seems I'm done with the differential tests and am waiting for input from you :)
With 2048 MB of RAM I'm getting stable BSODs at ~100 GB copied.
The BSODs vary: not only the hash mismatch, they appear random.
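For reference, the memory change corresponds to a Proxmox command along these lines (VM ID taken from the command line above):

qm set 137 --memory 4096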
