Code review

Jean-Noël Grad edited this page Oct 19, 2021 · 6 revisions

This page explains how code review is carried out in this repository.

Automated code review

Automated checks start when a PR is opened. The same checks are triggered again for every push to the branch associated with the PR.

Linting and formatting

The project uses various tools to automatically format the code and detect bugs. We recommend that contributors install and use these tools to avoid unnecessary delays in the processing of a PR. Installation instructions are provided in Tooling: Linter and code formatter. The Python formatting tool can be overzealous at times (e.g. when splitting a list of lists over numerous lines, one item at a time), so do not hesitate to manually tweak the result to make it more readable.
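
As a hypothetical illustration (not actual tool output), a compact nested list may be expanded one item per line by the formatter, and can be folded back by hand:

```python
# Hand-written compact layout, one row of the matrix per line:
weights = [[0.0, 0.5, 0.0],
           [0.5, 0.0, 0.5],
           [0.0, 0.5, 0.0]]

# An autoformatter may instead split this one item at a time:
# weights = [
#     [0.0, 0.5, 0.0],
#     [0.5, 0.0, 0.5],
#     [0.0, 0.5, 0.0],
# ]
# Both layouts are valid Python; manually restore the compact one
# if it is easier to read.
```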

GitLab-CI

The PR is pulled to a GitLab mirror and tested with GitLab-CI. There are three main stages:

  • style stage: checks that the code is correctly formatted and that the Doxygen documentation compiles
  • build stage: compile ESPResSo on various Linux operating systems and run the testsuite
  • additional checks stage: build and check the Sphinx documentation, test the binaries built in the previous stage without a GPU and with a different number of MPI ranks
If the first stage fails, the subsequent stages are skipped and a message is posted to the PR by a bot. This post informs the contributor that the code doesn't conform to our specifications and provides suggestions on how to improve the code. It is common for new contributors to struggle with the bot, usually because they forgot to run pre-commit run --all-files or didn't install the code style tools.

GitHub Actions

Builds ESPResSo and runs the testsuite on a macOS runner.

codecov

Code coverage data generated by the runners in the GitLab-CI pipeline is sent to codecov to generate a report. It can take time for the runners to execute the testsuite, therefore the percentages shown in the first 30 minutes can be ignored, as they are based on incomplete data.

The "codecov/patch — 96% of diff hit (target 94%)" line is the most important one. When adding a new feature to ESPResSo, the corresponding code must be fully covered by tests.

kodiak

This is the merge bot. Its status indicates what is missing for the PR to be merged.

PR status

There are additional indicators:

  • "This branch is out-of-date with the base branch"
    • This message provides a button to merge the python branch into the PR. Please ignore this message! The merge bot already takes care of merging the python branch into the PR, once it has been approved by the core team. Merging the python branch manually doesn't help and will make it more difficult for you to carry out rebase operations.
  • "This branch has conflicts that must be resolved"
    • This message provides a button to resolve merge conflicts. The button is deactivated if the merge conflict contains deleted files.
  • "This pull request is still a work in progress"
    • This message appears on Draft PRs. Once the PR is marked as Ready for Review, the message disappears. This functionality replaces the old WIP bot.

Human code review

Once the automated checks have passed, members of the ESPResSo team will start reviewing the code. Only a review by a core team member counts towards mergeability. Sometimes, review can be delegated to a non-core team member, in which case the review will appear as a check mark in a grey circle until a core team member confirms the review.

Style guide

Reviewers will assess the code and make suggestions for improvement. Below is a non-exhaustive list of common style recommendations.

C++:

  • no use of malloc/free, instead use STL containers or new/delete
  • no use of const_cast, except when interfacing with C libraries
  • no use of C-style casts (unsafe: risk of reinterpret_cast or const_cast), instead use static_cast<T>()
  • everything that can be const should be
  • prefer "East const" in new code
  • avoid bare owning pointers in favor of smart pointers
  • avoid bare loops in favor of STL algorithms and Boost Range algorithms
  • avoid MPI primitives in favor of boost::mpi collectives
  • avoid macros
  • don't introduce new global variables; existing ones are being progressively removed
  • avoid using existing global variables, instead pass them by reference
  • avoid mixing signed and unsigned types, e.g. passing an int to a function that takes a std::size_t
  • CamelCase vs. snake_case: see the dedicated page on Naming conventions and folder structure
  • see also the C++ Core Guidelines; although we do not enforce them, they are an excellent source of good coding practices with detailed explanations

Python and Cython:

  • no wildcard imports, e.g. from espressomd.lb import *
  • avoid importing without a namespace, e.g. from espressomd import lb (prefer import espressomd.lb)
  • no use of eval()
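
A short sketch of the import and eval() rules; the espressomd module name is taken from the examples above, and ast.literal_eval from the standard library is shown as one possible safe alternative:

```python
import ast

# Preferred: keep the full namespace so names stay traceable, e.g.
#   import espressomd.lb
# Avoid "from espressomd import lb", and never "from espressomd.lb import *".

# Instead of eval(), which executes arbitrary code, literal values
# can be parsed safely with ast.literal_eval():
params = ast.literal_eval("{'agrid': 0.5, 'tau': 0.01}")
assert params == {"agrid": 0.5, "tau": 0.01}
```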

Running CI tests locally

Ideally, every commit should be self-contained and pass the complete test suite. This makes it easier (and safer) to cherry-pick or revert a commit at a later point in time.

Assuming that you have installed all dependencies, you can run the CI checks with the following commands:

# run formatter and linter
pre-commit run --all-files
# build the documentation
make sphinx
make tutorials
# check the documentation
../maintainer/CI/dox_warnings.sh
../maintainer/CI/doc_warnings.sh
../maintainer/CI/jupyter_warnings.py
# run the complete test suite
make -j$(nproc) check_unit_tests && make check_python && make check_tutorials && make check_benchmarks && make check_samples

You do not need to run all of these tests for every commit, because your changes might only affect the outcome of one of them. For changes in the Sphinx documentation, it is sufficient to run the doc_warnings.sh check. For small changes to the core or Python code, it is usually sufficient to run make -j$(nproc) check_unit_tests and make check_python_skip_long, which skips slow Python tests. To run tests with all CPU cores, update the build directory with cmake . -DCTEST_ARGS="-j$(nproc)".

If you need to reproduce the exact build environment used in GitLab-CI, you can download our Docker images and run the tests in a container using the instructions in Tooling: Docker.