The MagicDraw VIATRA Query performance benchmark

This benchmark is intended to showcase the performance of the VIATRA Query engine when run directly on MagicDraw SysML models, through the EMF interface.

Getting Started

Cloning this repository

Use the green "Clone or download" button on the GitHub web page to clone the repository and get the source.
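
If you prefer the command line, you can clone the repository directly (the URL below is assembled from the repository name shown on GitHub; adjust the target folder to your liking):

git clone https://github.com/IncQueryLabs/magicdraw-viatra-benchmark.git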

From now on, the folder that contains this README.md file on your computer is called WORKSPACE_BENCHMARK:

export WORKSPACE_BENCHMARK=c:/git/magicdraw-viatra-benchmark

Get the instance models

You need to get a set of instance models (mdzip files) that the benchmark runs on.

From now on, the folder that contains the models on your computer is called MODEL_LOCATION:

export MODEL_LOCATION=c:/models-tmt/

You can download the models from this link: http://static.incquerylabs.com/projects/magicdraw/benchmark/models/models-tmt.zip
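
For example, assuming a Unix-like shell (such as Git Bash) with curl and unzip available, the archive can be downloaded and unpacked into MODEL_LOCATION like this:

# Download the archive and extract the mdzip files into MODEL_LOCATION
curl -L -o models-tmt.zip http://static.incquerylabs.com/projects/magicdraw/benchmark/models/models-tmt.zip
mkdir -p "$MODEL_LOCATION"
unzip models-tmt.zip -d "$MODEL_LOCATION"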

Running the benchmark

After you have cloned this repository, you can run the benchmark by executing benchmark.sh in the WORKSPACE_BENCHMARK folder, for example:
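
cd "$WORKSPACE_BENCHMARK"
./benchmark.sh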

In addition, you need to have Gradle and Java installed to run the build. The results of the benchmark will be available in the com.incquerylabs.benchmark.test/results folder.

Create diagrams from results

Reporting is done using the MONDO-SAM tool; we use its 0.1-maintenance branch. MONDO-SAM requires R and Python 3 to be installed and available on your PATH. You can find information on how to set up MONDO-SAM in its repository.

The conversion and diagram generation are driven by a script that downloads MONDO-SAM, converts the results from com.incquerylabs.benchmark.test/results/<query> to benchmark/results, and then generates the diagrams in benchmark/diagrams.

You need to set the WORKSPACE_BENCHMARK environment variable to the repository root.
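
A minimal sketch of what such a script might look like is shown below. It assumes MONDO-SAM is cloned from GitHub on the 0.1-maintenance branch (repository URL assumed), and the final reporting invocation (report.py and its options) is a hypothetical placeholder; consult the MONDO-SAM repository for the actual command.

cd "$WORKSPACE_BENCHMARK"
# Fetch MONDO-SAM on the 0.1-maintenance branch (repository URL assumed)
git clone -b 0.1-maintenance https://github.com/FTSRG/mondo-sam.git
# Collect the raw measurement results under benchmark/results
# (the real script also converts them to MONDO-SAM's input format)
mkdir -p benchmark/results benchmark/diagrams
cp -r com.incquerylabs.benchmark.test/results/. benchmark/results/
# Generate the diagrams; report.py is a hypothetical placeholder for
# MONDO-SAM's actual reporting entry point
python3 mondo-sam/reporting/report.py --source benchmark/results --output benchmark/diagrams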

Configuring the benchmark

By default, the benchmark runs on a very small subset of queries, with the smallest model, and only once per measurement. This behaviour is controlled by five variables that determine how com.incquerylabs.benchmark.test/run.sh behaves:

The names of the variables and their possible values are the following:

  • BENCHMARK_ENGINES: Which query engine implementations to measure
    • Default (all possible values): RETE, LOCAL_SEARCH, LOCAL_SEARCH_HINTS-CONDITION_FIRST, LOCAL_SEARCH_HINTS-TC_FIRST, HYBRID
  • BENCHMARK_QUERIES: Which queries to measure; all is a special value that runs all the queries at once in the same MagicDraw instance
    • Default: transitiveSubstatesWithCheck3
    • Possible values: all, blocksOrRequirementsOrConstraints, alphabeticalDependencies, circularDependencies, loopTransitionWithTriggerEffectEventNoGuard, stateWithMostSubstates, transitiveSubstatesWithCheck3, allBenchMarkedQueries
  • BENCHMARK_QUERIES_EXCLUDE: Which queries to exclude from the measurement
    • Default: empty (no queries are excluded)
    • Possible values: blocksOrRequirementsOrConstraints, alphabeticalDependencies, circularDependencies, loopTransitionWithTriggerEffectEventNoGuard, stateWithMostSubstates, transitiveSubstatesWithCheck3, allBenchMarkedQueries
  • BENCHMARK_SIZES: Which models to measure
    • Default: 5000
    • Possible values: 300000, 540000, 780000, 1040000, 1200000
  • BENCHMARK_RUNS: How many times the measurements should be run for each configuration
    • Default: 1

You can either set these as environment variables before running the benchmark or modify the run.sh script directly.
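
For example, to measure the RETE engine on all benchmarked queries against the 300000 model with three runs per configuration, you might export the variables like this before starting the benchmark (if you want to pass several values to one variable, check run.sh for the expected list format):

export BENCHMARK_ENGINES=RETE
export BENCHMARK_QUERIES=allBenchMarkedQueries
export BENCHMARK_QUERIES_EXCLUDE=
export BENCHMARK_SIZES=300000
export BENCHMARK_RUNS=3
./benchmark.sh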

Repository structure
