
Wrong read speed result for Samsung SSD 970 EVO Plus #48

Open
Vascom opened this issue Mar 7, 2021 · 26 comments
Labels
bug (Something isn't working), confirmed, related to fio

Comments

@Vascom

Vascom commented Mar 7, 2021

  • Linux-distro: Fedora 33
  • Desktop Environment (KDE/GNOME etc.): KDE
  • Qt Version: 5.15.2
  • KDiskMark Version: 2.2.0
  • FIO Version: fio-3.21

Description:

Strange speed result for Samsung SSD 970 EVO Plus (1TB and others).
According to the datasheet, and in gnome-disk-utility, I get ~3500 MB/s sequential read, but KDiskMark gives me only ~1500 MB/s, while the write speed is ~2900 MB/s. So the read result does not seem plausible.

Steps To Reproduce:

Run test.

Can you help solve this bug?

@Vascom added the bug (Something isn't working) and unconfirmed labels on Mar 7, 2021
@tim77
Contributor

tim77 commented Mar 7, 2021

Same issue. When I first tested 5 months ago, KDiskMark showed the real speed, 3500 MB/s:

Screenshot from 2020-10-24 09-18-32

Now:
Screenshot from 2021-03-06 12-10-00

@JonMagon
Owner

JonMagon commented Mar 7, 2021

Try disabling the flush cache option in Settings; it may not work as expected. If that helps, I will disable the option by default.

@Vascom
Author

Vascom commented Mar 7, 2021

Disabling the flush cache option didn't help.

@JonMagon
Owner

JonMagon commented Mar 7, 2021

Could you find the release in which the issue first appeared?

@tim77
Contributor

tim77 commented Mar 7, 2021

As far as I know, @Vascom has already tried this. I also thought the issue appeared in some KDiskMark update, but it seems something else is causing it.

@JonMagon
Owner

JonMagon commented Mar 7, 2021

I can't reproduce it because I don't have such an SSD, but it's definitely a bug.

@tim77
Contributor

tim77 commented Mar 7, 2021

There is a similar discussion on Reddit about the same disk. I doubt it will help, but at least it seems to be a common issue.

@Vascom
Author

Vascom commented Mar 7, 2021

Maybe we can give you some debug output?

@JonMagon
Owner

JonMagon commented Mar 7, 2021

Can you please check with 2.0.0 and 1.6.2?

mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release ..
cmake --build .

@tim77
Contributor

tim77 commented Mar 7, 2021

Tested with 2.0.0 and 1.6.2, and it's 💯 the same issue. Maybe this is a kernel bug or something? Very weird, since my first test on the same OS and the same SSD was OK.

@JonMagon
Owner

JonMagon commented Mar 7, 2021

Then it may be caused by the separation of loops inside KDiskMark. Could you please also try 1.6.0?

@Vascom
Author

Vascom commented Mar 8, 2021

The same with 1.6.0.
Screenshot_20210308_083000

@JonMagon
Owner

JonMagon commented Mar 8, 2021

Then it may really not be a KDiskMark issue.

@JonMagon added the help wanted (Extra attention is needed) label on Mar 9, 2021
@JonMagon
Owner

JonMagon commented Mar 9, 2021

What is the output of the commands below?

echo 3 | sudo tee /proc/sys/vm/drop_caches
fio --ioengine=libaio --direct=1 --randrepeat=0 --refill_buffers --end_fsync=1 --filename=test-fio.tmp --name=readjob --size=128M --bs=1m --rw=read --iodepth=8 --loops=5
echo 3 | sudo tee /proc/sys/vm/drop_caches
fio --ioengine=libaio --direct=1 --filename=test-fio.tmp --name=readjob --size=128M --bs=1m --rw=read --iodepth=8 --loops=5

@Vascom
Author

Vascom commented Mar 9, 2021

fio --ioengine=libaio --direct=1 --randrepeat=0 --refill_buffers --end_fsync=1 --filename=test-fio.tmp --name=readjob --size=128M --bs=1m --rw=read --iodepth=8 --loops=5
readjob: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
fio-3.21
Starting 1 process
readjob: Laying out IO file (1 file / 128MiB)

readjob: (groupid=0, jobs=1): err= 0: pid=4886: Tue Mar  9 12:45:11 2021
  read: IOPS=1376, BW=1376MiB/s (1443MB/s)(640MiB/465msec)
    slat (usec): min=398, max=2304, avg=715.00, stdev=285.00
    clat (usec): min=313, max=14322, avg=4882.13, stdev=1889.90
     lat (usec): min=890, max=16629, avg=5598.28, stdev=2146.95
    clat percentiles (usec):
     |  1.00th=[ 1029],  5.00th=[ 4015], 10.00th=[ 4113], 20.00th=[ 4228],
     | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621],
     | 70.00th=[ 4752], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[11076],
     | 99.00th=[11731], 99.50th=[13042], 99.90th=[14353], 99.95th=[14353],
     | 99.99th=[14353]
  lat (usec)   : 500=0.78%, 1000=0.16%
  lat (msec)   : 2=1.41%, 4=2.50%, 10=88.59%, 20=6.56%
  cpu          : usr=1.29%, sys=81.72%, ctx=6, majf=0, minf=2059
  IO depths    : 1=0.8%, 2=1.6%, 4=3.1%, 8=94.5%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1376MiB/s (1443MB/s), 1376MiB/s-1376MiB/s (1443MB/s-1443MB/s), io=640MiB (671MB), run=465-465msec
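A note for readers comparing these figures with the datasheet: fio reports bandwidth in both binary MiB/s and decimal MB/s, while datasheets use decimal MB/s. A quick shell sanity check of the conversion fio prints above (1376 MiB/s alongside 1443 MB/s):

```shell
# Convert fio's binary MiB/s to decimal MB/s (1 MiB = 1048576 bytes).
awk 'BEGIN { printf "%.0f\n", 1376 * 1048576 / 1e6 }'
# prints 1443, matching the "(1443MB/s)" fio shows next to 1376MiB/s
```

So the unit difference accounts for only a few percent; it cannot explain a halved result.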

@tim77
Contributor

tim77 commented Mar 9, 2021

readjob: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
fio-3.21
Starting 1 process

readjob: (groupid=0, jobs=1): err= 0: pid=279803: Tue Mar  9 11:46:33 2021
  read: IOPS=1649, BW=1649MiB/s (1730MB/s)(640MiB/388msec)
    slat (usec): min=331, max=4245, avg=601.14, stdev=252.80
    clat (usec): min=237, max=10631, avg=4064.16, stdev=1420.99
     lat (usec): min=731, max=14878, avg=4665.47, stdev=1630.65
    clat percentiles (usec):
     |  1.00th=[  775],  5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3589],
     | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720],
     | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 5669], 95.00th=[ 7767],
     | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[10683],
     | 99.99th=[10683]
  lat (usec)   : 250=0.47%, 500=0.31%, 750=0.16%, 1000=0.62%
  lat (msec)   : 2=1.56%, 4=80.94%, 10=15.62%, 20=0.31%
  cpu          : usr=0.52%, sys=67.96%, ctx=27, majf=0, minf=2059
  IO depths    : 1=0.8%, 2=1.6%, 4=3.1%, 8=94.5%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1649MiB/s (1730MB/s), 1649MiB/s-1649MiB/s (1730MB/s-1730MB/s), io=640MiB (671MB), run=388-388msec
readjob: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
fio-3.21
Starting 1 process

readjob: (groupid=0, jobs=1): err= 0: pid=279979: Tue Mar  9 11:47:23 2021
  read: IOPS=1649, BW=1649MiB/s (1730MB/s)(640MiB/388msec)
    slat (usec): min=335, max=4083, avg=601.38, stdev=233.62
    clat (usec): min=219, max=8998, avg=4076.16, stdev=1331.32
     lat (usec): min=717, max=13082, avg=4677.74, stdev=1517.70
    clat percentiles (usec):
     |  1.00th=[  758],  5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3589],
     | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752],
     | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 6783], 95.00th=[ 7046],
     | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[ 8979], 99.95th=[ 8979],
     | 99.99th=[ 8979]
  lat (usec)   : 250=0.62%, 500=0.16%, 750=0.16%, 1000=0.62%
  lat (msec)   : 2=1.56%, 4=80.00%, 10=16.88%
  cpu          : usr=0.00%, sys=68.73%, ctx=28, majf=0, minf=2060
  IO depths    : 1=0.8%, 2=1.6%, 4=3.1%, 8=94.5%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1649MiB/s (1730MB/s), 1649MiB/s-1649MiB/s (1730MB/s-1730MB/s), io=640MiB (671MB), run=388-388msec

@JonMagon
Owner

JonMagon commented Mar 9, 2021

I would guess that this is the real read performance.
What if you test the speed with dd? Copy all of the output.

dd if=/dev/zero of=test-fio.tmp bs=1M count=128
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=test-fio.tmp of=/dev/null bs=1M

@tim77
Contributor

tim77 commented Mar 9, 2021

echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=test-fio.tmp of=/dev/null bs=1M
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.0916864 s, 1.5 GB/s
3
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.0393467 s, 3.4 GB/s
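A note on the run above: only the first dd read followed a drop_caches, so the second figure (3.4 GB/s) was served largely from the page cache and measures RAM rather than the SSD. As a rough, sudo-free alternative, GNU coreutils dd (8.11 or later) accepts a `nocache` flag that advises the kernel to drop a file's cached pages; a minimal sketch:

```shell
# Sketch assuming GNU coreutils dd with the nocache flag (>= 8.11).
dd if=/dev/zero of=test-fio.tmp bs=1M count=128   # create a test file
dd if=test-fio.tmp iflag=nocache count=0          # advise dropping its cached pages
dd if=test-fio.tmp of=/dev/null bs=1M             # timed, (mostly) uncached read
rm test-fio.tmp
```

Since the advice is only a hint (posix_fadvise), the read may still be partially cached; `echo 3 | sudo tee /proc/sys/vm/drop_caches` remains the stronger option.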

@JonMagon
Owner

JonMagon commented Mar 9, 2021

No idea; something is wrong with fio.

@JonMagon
Owner

JonMagon commented Mar 9, 2021

By the way, @tim77, haven't you changed partitions since the results were correct? Maybe the alignment is wrong...

@tim77
Contributor

tim77 commented Mar 9, 2021

By the way, @tim77, haven't you changed partitions since the results were correct? Maybe the alignment is wrong...

Indeed, the partition was changed since the first time. BTW, I've also wondered about another issue in BTRFS, and whether alignment could be the cause of it: https://pagure.io/fedora-btrfs/project/issue/36#comment-701576

@Vascom
Author

Vascom commented Mar 9, 2021

I just found that it seems not to be a KDiskMark problem.
On kernel 5.9.16 I see full speed, but on 5.10-5.11.5 it's still half speed.

@JonMagon
Owner

I think you should open an issue in the fio repository; I can't resolve it.

@JonMagon removed the help wanted (Extra attention is needed) label on Mar 10, 2021
@tim77
Contributor

tim77 commented Mar 10, 2021

Meanwhile, gnome-disks shows a speed close to the datasheet rating, but it also clearly shows that it's half at the beginning:
Screenshot from 2021-03-09 22-10-33

I'm definitely not trying to say that this is a KDiskMark bug. But for an end user this might not be obvious.

@JonMagon
Owner

JonMagon commented Mar 10, 2021

I understand perfectly, but I can't fix it myself. It's one thing if it never worked right; it's quite another if it suddenly broke, and with only one (?) device at that.

@tim77
Contributor

tim77 commented Mar 10, 2021

Yep. We will try to file bugs against the kernel and fio. Worth a try; at least we might learn something new, and maybe this is a feature.
