
Why are the results on CPU and GPU the same? #1180

Open

KingICCrab opened this issue Mar 19, 2024 · 2 comments

@KingICCrab

When I run:
cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open
default-reference-gpu-onnxruntime-v1.17.1-default_config
+---------+----------+----------+--------+-----------------+---------------------------------+
| Model | Scenario | Accuracy | QPS | Latency (in ms) | Power Efficiency (in samples/J) |
+---------+----------+----------+--------+-----------------+---------------------------------+
| bert-99 | Offline | X () | 44.157 | - | |
+---------+----------+----------+--------+-----------------+---------------------------------+

When I run:
cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cpu --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open
default-reference-gpu-onnxruntime-v1.17.1-default_config
+---------+----------+----------+--------+-----------------+---------------------------------+
| Model | Scenario | Accuracy | QPS | Latency (in ms) | Power Efficiency (in samples/J) |
+---------+----------+----------+--------+-----------------+---------------------------------+
| bert-99 | Offline | X () | 44.157 | - | |
+---------+----------+----------+--------+-----------------+---------------------------------+
I changed only "--device". Why are the results the same?

@gfursin
Contributor

gfursin commented Mar 19, 2024

There may be several potential issues with the CUDA run: if CUDA is not installed properly or fails on your system, ONNX Runtime may silently fall back to the CPU. I haven't seen such cases before, but I assume that is what happened here. Also, we usually do not mix CPU and CUDA installations, so you need to clean the CM cache between such runs:

cm rm cache -f

Maybe you can clean the cache, rerun the above command with --device=cuda, and submit the full log?
We may need to handle such cases better ... Thanks a lot again for your feedback - it helps us improve CM for everyone!
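
In the meantime, here is a quick sanity check (a minimal sketch using the standard ONNX Runtime Python API; run it by hand in the Python environment that CM set up - it is not part of CM itself) to see whether your onnxruntime build exposes the GPU at all:

import onnxruntime as ort

# A working onnxruntime-gpu install should list 'CUDAExecutionProvider' here;
# if only 'CPUExecutionProvider' appears, inference silently runs on the CPU.
print(ort.get_available_providers())

If 'CUDAExecutionProvider' is missing from that list, it would explain why your CPU and GPU numbers are identical.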

@KingICCrab
Author

KingICCrab commented Mar 20, 2024

After running cm rm cache -f, I ran
cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open
The errors are as follows:

GPU Device ID: 0
GPU Name: NVIDIA GeForce RTX 4070 Laptop GPU
GPU compute capability: 8.9
CUDA driver version: 12.2
CUDA runtime version: 12.4
Global memory: 8585216000
Max clock rate: 1980.000000 MHz
Total amount of shared memory per block: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block X: 1024
Max dimension size of a thread block Y: 1024
Max dimension size of a thread block Z: 64
Max dimension size of a grid size X: 2147483647
Max dimension size of a grid size Y: 65535
Max dimension size of a grid size Z: 65535

        Detected version: 24.0
         ! cd /home/zhaohc/CM/repos/local/cache/07081a5ef7a04a4a
         ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
         ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
           ! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
           ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
           ! call "detect_version" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
      Detected version: 5.1
       ! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
       ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
       ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py

Generating SUT description file for default-onnxruntime
HW description file for default not found. Copying from default!!!
! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-mlperf-inference-sut-description/customize.py

/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘void mlperf::logging::AsyncLog::RecordTokenCompletion(uint64_t, std::chrono::_V2::system_clock::time_point, mlperf::QuerySampleLatency)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:483:61: warning: unused parameter ‘completion_time’ [-Wunused-parameter]
483 | PerfClock::time_point completion_time,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector<mlperf::QuerySampleLatency> mlperf::logging::AsyncLog::GetTokenLatencies(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:601:68: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
601 | std::vector<QuerySampleLatency> AsyncLog::GetTokenLatencies(size_t expected_count) {
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector<mlperf::QuerySampleLatency> mlperf::logging::AsyncLog::GetTimePerOutputToken(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:607:72: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
607 | std::vector<QuerySampleLatency> AsyncLog::GetTimePerOutputToken(size_t expected_count){
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector<int64_t> mlperf::logging::AsyncLog::GetTokensPerSample(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:613:58: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
613 | std::vector<int64_t> AsyncLog::GetTokensPerSample(size_t expected_count) {
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc: In instantiation of ‘void mlperf::loadgen::RunPerformanceMode(mlperf::SystemUnderTest*, mlperf::QuerySampleLibrary*, const mlperf::loadgen::TestSettingsInternal&, mlperf::loadgen::SequenceGen*) [with mlperf::TestScenario scenario = mlperf::TestScenario::SingleStream]’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1132:61: required from ‘static mlperf::loadgen::RunFunctions mlperf::loadgen::RunFunctions::GetCompileTime() [with mlperf::TestScenario compile_time_scenario = mlperf::TestScenario::SingleStream]’
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1138:58: required from here
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
918 | PerformanceSummary perf_summary{sut->Name(), settings, std::move(pr)};
| ^~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc: In instantiation of ‘void mlperf::loadgen::FindPeakPerformanceMode(mlperf::SystemUnderTest*, mlperf::QuerySampleLibrary*, const mlperf::loadgen::TestSettingsInternal&, mlperf::loadgen::SequenceGen*) [with mlperf::TestScenario scenario = mlperf::TestScenario::SingleStream]’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1132:61: required from ‘static mlperf::loadgen::RunFunctions mlperf::loadgen::RunFunctions::GetCompileTime() [with mlperf::TestScenario compile_time_scenario = mlperf::TestScenario::SingleStream]’
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1138:58: required from here
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
988 | PerformanceSummary base_perf_summary{sut->Name(), base_settings,
| ^~~~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]

SUT: default-reference-gpu-onnxruntime-v1.17.1-default_config, model: bert-99, scenario: Offline, target_qps updated as 44.1568
New config stored in /home/zhaohc/CM/repos/local/cache/9039508f728b4d64/configs/default/reference-implementation/gpu-device/onnxruntime-framework/framework-version-v1.17.1/default_config-config.yaml
[2024-03-20 20:08:05,501 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
[2024-03-20 20:08:05,506 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
