
Tests

The VisionEval model system and the design of the framework software incorporate a number of features that facilitate automated testing to assure that modules will work within the system. Perhaps the most significant feature is the 'data typing' system, which requires modules to completely specify all data that the module handles in any fashion: as model inputs, as data fetched from the common datastore, and as data that is output to be saved in the common datastore. These data specifications establish 'contracts' that assure data consistency, and the framework software includes a number of functions that use the specifications to check consistency. These functions help to ensure that modules will work properly together.
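For example, each module declares, field by field, the data it reads from and writes to the datastore. The fragment below is a hedged sketch of what such a specification looks like; the attribute names and the items()/item() helpers follow the pattern used in VisionEval module source files, but the exact attribute set shown here is an assumption and should be checked against the module developer documentation.

```r
# Hedged sketch of a module specification 'contract'; attribute names are
# illustrative, and items()/item() are assumed to be the list-building helpers
# provided by the visioneval framework.
MyModuleSpecifications <- list(
  RunBy = "Azone",
  # Data the module fetches from the common datastore
  Get = items(
    item(
      NAME = "NumHh",
      TABLE = "Azone",
      GROUP = "Year",
      TYPE = "households",
      UNITS = "HH",
      PROHIBIT = c("NA", "< 0"),
      ISELEMENTOF = ""
    )
  ),
  # Data the module outputs to be saved in the common datastore
  Set = items(
    item(
      NAME = "AveHhSize",
      TABLE = "Azone",
      GROUP = "Year",
      TYPE = "compound",
      UNITS = "PRSN/HH",
      NAVALUE = -1,
      PROHIBIT = c("NA", "< 0"),
      ISELEMENTOF = "",
      SIZE = 0,
      DESCRIPTION = "Average number of persons per household"
    )
  )
)
```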

Following are some of the checks that are done:

  • Input data are checked against input data specifications;
  • Input data geography is checked against specified model geography;
  • Module specifications are checked for correctness;
  • Module specifications for data to be fetched from the datastore are checked against the specifications for the data stored in the datastore; and,
  • Module outputs are checked for consistency with the module specifications for those outputs.

Because of these and related checks, it is possible to automate a number of tests that allow module developers to verify that their module (when supplied with suitable test inputs) will work in the VisionEval model system. These checks are carried out by the 'testModule' function. In addition, this test can be run automatically when the package is built, enabling the developer to confirm at build time that the module works, and enabling correctness to be checked when the package is added to the VisionEval repository.
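These checks are typically exercised from a test script in the module package (for example, tests/scripts/test.R, referenced in the Test System section below). The sketch below is a minimal, assumed example; the module name is illustrative and the argument names, while typical of VisionEval test scripts, should be verified against the testModule documentation.

```r
# Minimal, assumed sketch of a module package test script; 'CreateHouseholds'
# is an example module name and the argument set should be verified against
# the testModule() documentation.
library(visioneval)

source("R/CreateHouseholds.R")   # load the module code being tested

testModule(
  ModuleName = "CreateHouseholds",
  LoadDatastore = FALSE,   # start from the package's test inputs, not an existing datastore
  SaveDatastore = TRUE,    # keep the resulting test datastore for inspection
  DoRun = TRUE             # run the module, not just the specification checks
)
```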

The data specification system also enables a model to be thoroughly checked before it is run, which will greatly reduce, if not eliminate, model runtime crashes. This checking is done by the 'initializeModel' function; a sketch of a typical model run script appears after the following list. The function checks:

  • Whether all module packages are installed, those packages contain the identified modules, and all module specifications are correct (have required attributes);
  • Whether the specified model geography is valid and whether model inputs are consistent with that geography;
  • Whether model inputs are consistent with specifications; and,
  • Whether every module when it is called will have the data it needs to run (either in specified input files or the datastore) and whether the attributes of those data are consistent with the specifications for the module.
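For example, a model run script calls initializeModel once, before any modules are run, so that all of these checks happen up front. The sketch below is illustrative only; the file, module, and package names are assumptions based on typical VisionEval model setups (such as VERSPM) and should be checked against an actual run_model.R script.

```r
# Illustrative sketch of the start of a model run script; file, module, and
# package names are assumptions based on typical VisionEval models.
library(visioneval)

# Check packages, modules, geography, inputs, and datastore consistency
# before any module is run.
initializeModel(
  ParamDir = "defs",
  RunParamFile = "run_parameters.json",
  GeoFile = "geo.csv",
  ModelParamFile = "model_parameters.json",
  LoadDatastore = FALSE,
  DatastoreName = NULL,
  SaveDatastore = TRUE
)

# Modules are only run after initializeModel has verified that each one
# will have the data it needs.
for (Year in getYears()) {
  runModule("CreateHouseholds", "VESimHouseholds", RunFor = "AllYears", RunYear = Year)
}
```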

These checks, which are built into the framework software, make it possible to implement automated checking of VisionEval module packages and VisionEval models whenever a module package and/or model is added to or modified in the VisionEval repository.

UI Testing

VEGUI is an application that allows a user to run various VisionEval models. VEGUI is tested using shinytest to ensure its functionality and usability. Multiple test scripts are written to test different functionalities. Following is a list of the tests currently implemented:

  • open_test: Checks that VEGUI opens and closes without errors;
  • run_verpat_model_test: Checks that VEGUI opens, runs a VERPAT model, collects all the results, and closes properly.

A test requires a set of expected results (images or JSON) against which to make comparisons and draw conclusions. A test is successful if it finishes; a failed test presents the user with the differences from the expected results. The main test script, run_vegui_test, serves as a host and makes calls to all of the aforementioned tests.
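Under the hood, shinytest drives the application through a ShinyDriver object and compares snapshots with the saved expected results. The sketch below is a generic shinytest example rather than the actual VEGUI test code; the application path is an assumed placeholder.

```r
# Generic shinytest sketch (not the actual VEGUI test code); the application
# path is an assumed placeholder.
library(shinytest)

app <- ShinyDriver$new("path/to/VEGUI")  # launch the app in a headless browser
app$snapshotInit("open_test")            # name the directory holding expected/current snapshots
app$snapshot()                           # capture outputs and compare to the expected results
app$stop()                               # close the app
```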

The following steps describe the process for creating a new test and/or its expected results (a rough sketch of the relevant run_vegui_test lines follows these steps):

  • Use test_template as a template to write your own script;
  • Set the parameter createExpectedResults in the main test script run_vegui_test to TRUE (this will allow a user to create a new set of expected results);
  • Source the new test script in run_vegui_test;
  • Run run_vegui_test to save the expected results;
  • Set the parameter createExpectedResults in the main test script run_vegui_test to FALSE;
  • Run run_vegui_test to ensure that the test finishes successfully.
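The fragment below is a rough, hypothetical sketch of the lines inside run_vegui_test that these steps refer to; the new test script name is a placeholder and the actual script should be consulted for its real structure.

```r
# Hypothetical excerpt of run_vegui_test illustrating the steps above;
# 'my_new_test.R' is a placeholder name for a script written from test_template.

createExpectedResults <- TRUE   # TRUE to generate a new set of expected results;
                                # set back to FALSE to compare against saved results

source("my_new_test.R")         # source the new test script so the host runs it

# Run run_vegui_test once with createExpectedResults = TRUE to save the expected
# results, then set it to FALSE and run again to confirm that the new test
# finishes successfully against those results.
```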

Test System

Travis CI is used to automatically test all modules and models to assure that they work properly. Here are a few details on the test system:

  • Tests run automatically with every commit, and the pass/fail status is shown at the bottom of the README as rendered by GitHub.

  • The current Travis build and test script is here; it includes all the steps to install, build, and run all the VE resources. To add a new module or model, add a line to the env property for that module or model. This will result in an additional parallel build process using Travis' build matrix. Example environment variables are:

    • FOLDER=sources/modules/VELandUse (folder location relative to the root; note framework, modules, and models are fixed)
    • SCRIPT=tests/scripts/test.R (test script to run, which is required by the VisionEval specification)
    • TYPE=module (either module or model)
    • DEPENDS=VE2001NHTS,VESimHouseholds (list of packages required to run this module or model)
  • The build logs, which are helpful when errors happen, are here.

  • Mysterious failures sometimes happen. If the log reports a seemingly substantive error (e.g. missing documentation, a missing package), the problem probably lies in a damaged cache. See the next item on caching, and the item after that, which describes how to delete the cache.

  • VE testing with Travis makes use of caching because of Travis' job time limit. In order for all the jobs to complete within the time limit, installation of the package dependencies needs to be done from the cache (rather than from source). The caches are built in the first stage of the tests, and then the modules are built and tested using the cached contents.

  • To delete a Travis cache (if you suspect it might be causing problems), open the failed build via your Dashboard or the list of recent build attempts. Look for the "More Options" menu in the upper right, choose "Caches", and find the cache associated with the branch whose build failed. Click the "garbage can" icon on the right to delete it. Then return to the build and push the "Restart Build" button.
