The value of 'micros/op' is not equal to 1,000,000 divided by the value of 'ops/sec' #12544
Comments
Did you set a value via --threads? I can't reproduce this using a much smaller value for --num and RocksDB 7.8.3, where 1M / 128350 = 7.791. It also works fine using the latest RocksDB as of ... My command line:
I suppose you are using 32 threads.
Yes, I used 32 threads.
Yes, I set it to 32 threads. The RocksDB version I used is v7.2.2. The command line was:
./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=8 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=readwhilewriting,stats --use_existing_db --histogram --threads=32 --num=3300000000 --duration=1800
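A plausible reading of the maintainer's question about --threads: db_bench reports micros/op as the average latency per operation *per thread*, while ops/sec is the aggregate throughput across all threads. With N threads the identity is therefore micros/op ≈ N × 1,000,000 / ops/sec, not 1,000,000 / ops/sec. A quick check against the numbers reported in this issue (this is an interpretation of the output semantics, not confirmed by the thread itself):

```python
# Check: with --threads=32, micros/op should be roughly
# threads * 1_000_000 / ops_per_sec, since micros/op is a
# per-thread latency while ops/sec is aggregate throughput.
# Figures below are copied from the db_bench output in this issue.

threads = 32
ops_per_sec = 31248
reported_micros_per_op = 1023.937

expected = threads * 1_000_000 / ops_per_sec
print(f"expected micros/op: {expected:.3f}")  # ~1024.066

# The reported value agrees to within ~0.02%; the small residue
# comes from per-thread timings being averaged rather than derived
# from the single wall-clock interval used for ops/sec.
assert abs(expected - reported_micros_per_op) / reported_micros_per_op < 0.001
```

This also matches the maintainer's single-threaded example, where 1M / 128350 = 7.791 lines up directly with the reported micros/op.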
I am running db_bench to do a 'readwhilewriting' benchmark on an SSD drive. The statistic results show that the value of 'micros/op' is not equal to 1,000,000 divided by the value of 'ops/sec'.
Below is a snippet of the results.
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Keys: 20 bytes each (+ 0 bytes user-defined timestamp)
Values: 800 bytes each (400 bytes after compression)
Entries: 3300000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 2580642.7 MB (estimated)
FileSize: 1321792.6 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
DB path: [/output/f2fs/nvme3n1_f2fs/eval]
readwhilewriting : 1023.937 micros/op 31248 ops/sec; 21.1 MB/s (1520593 of 1758999 found)
Expected behavior
micros/op = 1,000,000 / (ops/sec)
Actual behavior
1023.937 != 1,000,000 / 31248 (= 32.002)
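The size of the mismatch itself is suggestive: dividing the reported micros/op by the naive 1,000,000 / ops/sec gives almost exactly the thread count used in the run (32). A sketch of that check, using only the figures reported above:

```python
# The reported micros/op differs from 1e6/ops_per_sec by a factor
# that turns out to equal the --threads value (32), suggesting
# micros/op is a per-thread figure rather than a global one.
ops_per_sec = 31248
micros_per_op = 1023.937

naive = 1_000_000 / ops_per_sec   # ≈ 32.002
factor = micros_per_op / naive    # ≈ 31.996
print(round(factor))              # 32
```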
Steps to reproduce the behavior
Below is the command used to run the benchmark:
./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=8 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=fillrandom,stats --num=3300000000