This repository has been archived by the owner on Aug 5, 2022. It is now read-only.

Run benchmark

Daisy Deng edited this page Apr 16, 2019 · 9 revisions

Prerequisites

  1. To achieve the best performance, please refer to Recommendations to achieve best performance;
  2. Build Intel Caffe using the script 'scripts/build_intelcaffe.sh'.
  • To run a single-node performance benchmark, build Intel Caffe in single-node mode:
# scripts/build_intelcaffe.sh --compiler icc/gcc 
  • Otherwise, to run a multinode performance benchmark, build Intel Caffe in multinode mode:
# scripts/build_intelcaffe.sh --multinode --compiler icc/gcc --layer_timing

How to run benchmark

  1. Use the script below to launch a benchmark test. If you don't specify a config file, the default config file under 'scripts/benchmark_config_default.json' is used, which runs inference latency tests for the different models:
# Single socket inference:
# CAFFE_INFERENCE_MEM_OPT=1 scripts/run_benchmark.py -c/--configfile your_config_file.json
# Training:
# scripts/run_benchmark.py -c/--configfile your_config_file.json
# [scripts/benchmark_config_default.json]
  2. After the run finishes, you can check the final results or failure logs under 'result-benchmark-YearMonthDayHourMinuteSecond.log'; for more detailed logs, check 'result-platform-topology-YearMonthDayHourMinuteSecond.log'.
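Since each run produces a new timestamped log, a quick way to open the latest summary is to sort by modification time. This one-liner is only an illustration based on the filename pattern described above:

```shell
# Pick the most recent benchmark summary log (newest first).
# The 'result-benchmark-*.log' pattern follows the naming scheme above.
latest=$(ls -t result-benchmark-*.log | head -n 1)
echo "latest summary log: $latest"
```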

How to configure your benchmark config file

You need to prepare your own benchmark config file for benchmarking; a template is available under 'scripts/benchmark_config_template.json'.

- Below are detailed field descriptions for the benchmark config file:

    - "topology" : "the topology you want to benchmark, e.g. alexnet/googlenet/googlenet_v2/resnet_50/all_train/all_inf; all_train and all_inf will run all the benchmarks defined in the JSON file",
    - "hostfile" : "/your/hostfile",
    - "network" : "currently supports tcp or opa",
    - "netmask" : "the name of your Ethernet interface, e.g. eth0; required when running on a tcp network",
    - "dummy_data_use" : true, `set to true to use dummy data; set to false to use actual datasets, in which case you need to specify the dataset path in the model protocol file; the default is true (dummy data) for benchmarking;`
    - "test_mode" : "scal_test/train_throughput/inf_throughput/inf_latency",
    - "inf_instances" : "1, the number of instances to run in parallel",
    - "num_omp_threads" : "20, the number of threads per instance",
    - "caffe_bin" : "",
    - "engine" : "choose CAFFE, MKL2017 or MKLDNN",
    - "train_perf_batch_size_table" : {}, `the batch size table containing the batch sizes you want to use for each topology/platform combination in the training throughput test; by default, we use the best-known batch sizes from our internal tests.`
    - "infernece_perf_batch_size_table" : {}, `the batch size table containing the batch sizes you want to use for each topology/platform combination in the inference throughput test when test_mode is "inf_throughput"; by default, we use the best-known batch sizes from our internal tests. The inf_latency test always uses batch size 1.`
    - "scal_batch_size_table" : {}, `the batch size table containing the batch sizes you want to use for each topology/platform combination in the multinode scalability test; by default, we use the best-known batch sizes from our internal tests.`
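Putting the fields above together, a minimal config for a single-node inference latency test might look like the sketch below. All values are illustrative rather than recommendations, and the exact types of "inf_instances" and "num_omp_threads" are assumptions based on the field descriptions above:

```json
{
  "topology": "resnet_50",
  "hostfile": "/your/hostfile",
  "network": "tcp",
  "netmask": "eth0",
  "dummy_data_use": true,
  "test_mode": "inf_latency",
  "inf_instances": 1,
  "num_omp_threads": 20,
  "caffe_bin": "",
  "engine": "MKLDNN",
  "train_perf_batch_size_table": {},
  "infernece_perf_batch_size_table": {},
  "scal_batch_size_table": {}
}
```

Compare against 'scripts/benchmark_config_template.json' for the authoritative field list before use.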

Scripts are verified on CentOS 7.4.
