
The value of 'micros/op' is not equal to 1,000,000 divided by the value of 'ops/sec' #12544

Closed · bpan2020 opened this issue Apr 16, 2024 · 4 comments

@bpan2020

I am running db_bench to do a 'readwhilewriting' benchmark on an SSD drive. The statistics output shows that the value of 'micros/op' is not equal to 1,000,000 divided by the value of 'ops/sec'.

Below is a snippet of the results.

Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Keys: 20 bytes each (+ 0 bytes user-defined timestamp)
Values: 800 bytes each (400 bytes after compression)
Entries: 3300000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 2580642.7 MB (estimated)
FileSize: 1321792.6 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1

DB path: [/output/f2fs/nvme3n1_f2fs/eval]
readwhilewriting : 1023.937 micros/op 31248 ops/sec; 21.1 MB/s (1520593 of 1758999 found)

Expected behavior

micros/op = 1,000,000 / ops/sec

Actual behavior

1023.937 != 1,000,000 / 31248 (which is about 32.0)

Steps to reproduce the behavior

Below is the command used to run the benchmark:
./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=8 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=fillrandom,stats --num=3300000000

@mdcallag
Contributor

Did you set a value via --threads ?
What version or commit of RocksDB are you using?

I can't reproduce this using a much smaller value for --num and RocksDB 7.8.3; there, 1M / 128350 = 7.791:
fillrandom : 7.791 micros/op 128350 ops/sec 7.791 seconds 1000000 operations; 100.4 MB/s

Also works fine using the latest RocksDB as of ...
commit d8fb849 (HEAD -> main, origin/main, origin/HEAD)
Author: anand76 anand1976@users.noreply.github.com
Date: Fri Apr 19 19:13:31 2024 -0700

My command line:
./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=2 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=fillrandom,stats --num=1000000 --db=/data/m/rx

@bernard035

I suppose you are using 32 threads.
'micros/op' is actually micros per op per thread.
31248 * 1023.937 / 1e6 is about 32, so I think you're using 32 threads. Right?
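A minimal sketch of that arithmetic (assuming, as described above, that micros/op is per-op latency per thread, so micros/op ~= threads * 1e6 / ops_per_sec; the two input figures are the ones from the report):

# Reconcile db_bench's reported micros/op with ops/sec under the
# per-thread interpretation: micros/op ~= threads * 1e6 / ops_per_sec.
reported_micros_per_op = 1023.937
reported_ops_per_sec = 31248

# Thread count implied by the two reported figures.
implied_threads = reported_micros_per_op * reported_ops_per_sec / 1e6
print(round(implied_threads, 2))  # -> 32.0

# Conversely, with 32 threads the expected micros/op matches the report.
expected_micros_per_op = 32 * 1e6 / reported_ops_per_sec
print(round(expected_micros_per_op, 3))  # -> 1024.066, close to 1023.937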

@bpan2020
Author

> I suppose you are using 32 threads. 'micros/op' is actually micros per op per thread. 31248 * 1023.937 / 1e6 is about 32, so I think you're using 32 threads. Right?

Yes, I used 32 threads.

@bpan2020
Author

> Did you set a value via --threads? What version or commit of RocksDB are you using?

Yes, I set it to 32 threads. The RocksDB version I used is v7.2.2.
Oh, sorry, I gave the wrong command earlier. Here is the correct one:

./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=8 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=readwhilewriting,stats --use_existing_db --histogram --threads=32 --num=3300000000 --duration=1800

@bpan2020 closed this as completed May 8, 2024