
Raiden Development Guide

Welcome! This guide describes how to contribute to the Raiden Network codebase. It's here to help you understand the development practices we use and the requirements for opening a Pull Request against Raiden.

Contributing

There are two ways you can contribute to development: you can open an Issue, or, if you have programming skills, you can open a Pull Request.

Creating an Issue

If you experience a problem while using Raiden or want to request a feature then you should open an issue against the repository. All issues should contain:

For Feature Requests:

  • A description of what you would like to see implemented.
  • An explanation of why you believe this would make a good addition to Raiden.

For Bugs:

  • A short description of the problem.
  • A detailed description of your system, Raiden version, geth version, solidity version, etc.
  • The exact unexpected thing that occurred.
  • What you expected to happen instead.

Including sensitive data in your issue reports

In most cases we need to see the debug logs and the database of your Raiden node to be able to determine what happened and help debug the issue. If your issue occurred on the mainnet and you would like to keep the data private, you can encrypt the logs with a special PGP key we provide and send them to RaidenErrors@brainbot.li. A short guide on how to do this follows:

  1. Check this guide for how to install gpg and for general info on how to encrypt and decrypt files.
  2. Get the raiden pgp key from github.
  3. Import the key by: $: gpg --import path/to/raiden_errors_key.gpg
  4. Find the relevant debug logs and the DB and put them in an archive: $: tar -cvf debug_data.tar path/to/debug.log path/to/raiden.db
  5. Encrypt the archive with our public key: $: gpg -e -u "Enter your name here" -r "RaidenErrors@brainbot.li" debug_data.tar
  6. You should now have a file called debug_data.tar.gpg in your current directory.
  7. Take the above file and email it to RaidenErrors@brainbot.li

Last updated: 20th of December 2018

Privacy Notice - Sending sensitive data

This Privacy Notice informs you about the handling of personal data in case you choose to send us your notes, log files and/or databases for error tracking and debugging through the process described above.

Introduction

Brainbot labs Establishment has appointed a data protection officer (DPO) specifically for handling this data and ensuring its privacy.

Data handling within the Raiden team

  • The DPO or his deputy provides the data to the respective developer responsible for analyzing the data.
  • After the analysis, the DPO or his deputy will ensure that the data will be deleted from the computers used for the analysis as well as from any email server used by brainbot labs Establishment to forward the information.

Personal data collected

Since the user chooses the content of the files and databases sent to brainbot labs Establishment, responsibility for any personal data contained in them lies with the user submitting the data. However, if the user chooses to send standard log files generated by the Raiden client, the personal data will typically comprise:

  • Ethereum addresses
  • Token addresses and balances
  • Raiden transfer data

Usage of data collected

The data collected from users according to the process described above will only be used for analyzing errors reported by the user and as support for fixing the respective bugs. After the corresponding bug fix is released, the data will be deleted according to the process described above.

Creating a Pull Request

If you have coding skills and would like to contribute to the Raiden codebase, you can open a Pull Request (PR) against the repository.

All PRs should be:

  • Self-contained.
  • As short as possible and address a single issue or even a part of an issue. Consider breaking long PRs into smaller ones.

In order for a Pull Request to be merged into the main repository, it needs an approving review from one of the core Raiden developers, and all Continuous Integration tests must pass (the CI build must be green).

Additionally you need to sign the Raiden project CLA (Contributor License Agreement). Our CLA bot will help you with that after you create a pull request. If you or your employer do not hold the whole copyright of the submitted work, we cannot accept your contribution.

When your PR is not yet ready for review, open it as a Draft PR: use the small dropdown triangle on the green button to select the draft option. If you open a regular (non-Draft) PR, a reviewer will be automatically assigned to it.

Pull Request Reviews

It is the responsibility of the author to ask for at least one person to review their Pull Request. That person should know the area of the code being changed. If the chosen reviewer does not feel confident in the review, they can then ask for someone else to additionally look at the code.

All developers on the team should perform Pull Request reviews. Make it a habit to check this link often to help your fellow colleagues whose PRs are pending review.

We have a number of tools that automatically check code quality (flake8, mypy, pylint). All of these are run automatically by the CI, so fixes related to linting are not part of PR reviews. Additionally, reviewers are encouraged not to be nitpicky about the changes they ask of the PR author. If a request is indeed a nitpick, the reviewer should say so up front. Example:

nitpick: I don't really think XYZ makes sense here. If possible it would be nice to have it changed to KLM

The author of the PR can then choose to implement the nitpicks or ignore them.

PR authors should make pull request reviews easier by keeping them as small as possible. Touching some code does not mean it needs to be refactored; for example, don't mix style or typing changes into a big PR.

When reviewers start a PR review, they should write a comment in the PR stating so. For example:

Reviewing this now

This is to keep track of who is reviewing a PR and to also know when a PR review is ongoing.

For complicated PRs that touch the core of the protocol at least 2 core developers are recommended to have a look and provide an opinion.

When reviewing non-trivial PRs it is recommended to clone the branch locally, explore the changes in your editor, run the tests and experiment with the changes, so that you gain a better understanding of the code change and can give good, constructive feedback to the author.

Development environment setup

Nix

See nix.rst.

Development external dependencies

These are the required external dependencies for development:

  • Python >=3.8.
  • Ethereum client (Geth recommended).
  • Solidity compiler >=0.4.23.
  • A Matrix-capable server (Synapse recommended).
  • A compiler toolchain to compile the elliptic curve library.
  • Git for version control.

Get the code

Start by getting the source code:

git clone https://github.com/raiden-network/raiden.git
cd raiden

Install system-wide dependencies

Archlinux

All the required packages are available in community or extra:

sudo pacman -Sy go-ethereum python python-pip python-virtualenv solidity \
base-devel git
Debian/Ubuntu

To get the Geth client and the Solidity compiler, add Ethereum's PPA repository:

sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get update

Then install the required packages:

sudo apt-get install build-essential git libffi-dev libgmp-dev libssl-dev \
libtool pkg-config python3 python3-pip python3-dev python3-virtualenv \
ethereum solc git

Create the virtual env

It's highly recommended to use a virtual environment for development. This isolates the Python dependencies used by Raiden from your system:

virtualenv env
source ./env/bin/activate

Development dependencies

Install the development dependencies:

make install-dev

Development Guidelines

Development and release branches

The primary development branch is develop. It is used for testing and CI and major releases are based on it. A release branch is created only as needed, when the time comes for the first patch. In other words, there is no release branch for 2.0.0, but once a change is identified as a candidate for a patch, the release branch is created and the change is cherry-picked from develop to the release branch. The same release branch is used to host all patch releases for the corresponding major release.
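The flow above can be sketched with plain git commands. The branch names, tag names and file contents here are illustrative only, not the actual Raiden release process:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q
git config user.email "dev@example.com" && git config user.name "dev"
git checkout -qb develop
echo base > feature.txt && git add feature.txt && git commit -qm "release state"
git tag v2.0.0                    # the major release is tagged on develop
echo fix > feature.txt && git add feature.txt && git commit -qm "fix: patch candidate"
fix_commit=$(git rev-parse HEAD)
# The release branch is created only once the first patch is needed:
git checkout -qb releases/2.0 v2.0.0
git cherry-pick "$fix_commit"     # pick the fix from develop onto the release branch
```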

Testing

To run the tests, use pytest:

pytest raiden

Tests are split into unit tests, fuzz tests (currently grouped under the unit tests) and integration tests. The former are faster to execute, while the integration tests exercise the whole system but are slower to run. To choose which type of tests to run, use the appropriate directory:

pytest raiden/tests/<integration|unit>

For a detailed explanation of the different types of tests, see the test suite section below.

Committing Rules

For an exhaustive treatment, read this guide. It's all really good advice. Some rules you should always follow, though, are:

  • A commit title not exceeding 50 characters
  • A blank line after the title (optional if there is no description)
  • A description of what the commit did (optional if the commit is really small)

Why are these rules important? All tools that consume git repos and show you information treat the first 80 characters as a title. Even Github itself does this. And the git history looks really nice and neat if these simple rules are followed.

Encoding

Addresses should follow "sandwich encoding": each point of entry does its own encoding into binary, while the core programmatic API accepts only binary. Thus we set up the following rules:

  • The programmatic API should only expect binary and break if it receives anything else. It should type-check its input and provide meaningful errors for hex-encoded addresses or length mismatches.
  • All other places from which we receive addresses need to do their own encoding from whatever the input encoding is to binary. Such places are the CLI, the REST API, etc.
  • All modules which generate output to the outside world should also encode from binary to whatever the expected output encoding of that module is. Such places are stdout, communication with the Ethereum node, etc.
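A minimal sketch of these rules follows; the function and variable names are hypothetical, not Raiden's actual API:

```python
def transfer(address: bytes) -> str:
    # Core API: accepts binary only and gives meaningful errors otherwise.
    if not isinstance(address, bytes):
        raise TypeError(
            f"expected a binary address, got {type(address).__name__} "
            "(hex-encoded addresses must be decoded at the boundary)"
        )
    if len(address) != 20:
        raise ValueError(f"expected 20 bytes, got {len(address)}")
    return "transfer accepted"


def transfer_from_cli(address_hex: str) -> str:
    # Boundary (CLI, REST API, ...): decode hex input to binary, then call the core.
    hex_digits = address_hex[2:] if address_hex.startswith("0x") else address_hex
    return transfer(bytes.fromhex(hex_digits))
```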

Coding Style

In this section we describe the coding rules for contributing to the Raiden repository. All code you write should strive to comply with these rules.

General Style

The rules in this subsection should be followed across all languages.

Python

Raiden is written in Python and we follow the official Python style guide, PEP 8. For formatting, we use the automatic formatter black. The configuration options for black are set in pyproject.toml, so a simple run of

make black

will ensure that you comply with our formatting rules.

It is highly recommended to also use the other linting tools to automatically detect any style violations. The customizable part of pylint is in pylintrc.

All pull requests need to pass:

make lint

Line Length

Flake8 will warn you when a line exceeds 99 characters, which is the hard limit on line length; do not go above it. We also have a soft limit of 80 characters, which is not enforced and is there just to encourage short lines.

Sometimes black will reformat lines so that they still end up above our hard limit of 99 characters. In such cases, break up the expression yourself, e.g.

def testalongline(a):
    mysum = int(
        sum(
            content.value.amount
            for content in a.internal_field.fields_internal_long_named_dictionary_variables.values()
        )
    )
    return mysum

This will fail the flake8 check in make lint. Change it to:

def testalongline(a):
    mysum = sum(
        content.value.amount
        for content in a.internal_field.fields_internal_long_named_dictionary_variables.values()
    )
    mysum = int(mysum)
    return mysum

Shadowing built-ins

Shadowing built-in names is not allowed, and pylint will warn you about it. If you want to use a built-in name, add a trailing underscore, not a leading one (leading underscores are reserved for private attributes). For example: type -> type_.
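For instance (describe is a made-up helper, only here to illustrate the convention):

```python
def describe(value, type_=None):
    # `type_` avoids shadowing the built-in `type`, which stays usable below.
    if type_ is None:
        type_ = type(value)
    return type_.__name__
```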

Docstrings

For docstrings we follow PEP 257.

A single line docstring should be like this:

def a(b: B, c: C) -> D:
    """ Here be docs """
    pass

A multiline docstring should have a short title and then a body. So like this:

def a(b: B, c: C) -> D:
    """ Function Title

    body comes
    here
    """
    pass

Naming Convention

Use descriptive variable names and avoid short abbreviations.

The following is bad:

mgr = Manager()
a = AccountBalanceHolder()
s = RaidenService()

While this is good:

manager = Manager()
balance_holder = AccountBalanceHolder()
service = RaidenService()

We try to follow a consistent naming convention throughout the codebase to make it easy for the reader of the code to understand what is going on. Thus we introduce the following rules:

For addresses:

  • <name>_address_hex for hex encoded addresses
  • <name>_address for binary encoded addresses

Lists of objects:

  • <name>s, e.g. channels for a list of Channel object instances.

Use [] instead of list() to initialize an empty list.

Mappings/dicts:

If it is a simple one to one mapping

<name>_to_<name>, e.g. tokenaddress_to_taskmanager

If the mapped to object is a list then add an s

<name>_to_<name>s, e.g. tokenaddress_to_taskmanagers = defaultdict(list)

Use {} instead of dict() to initialize an empty dict.
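Putting the mapping rules together (the names are hypothetical examples of the convention, not real Raiden variables):

```python
from collections import defaultdict

# One-to-many mapping: the mapped-to objects are lists, so the name gets an "s".
tokenaddress_to_taskmanagers = defaultdict(list)
tokenaddress_to_taskmanagers[b"\x01" * 20].append("task-manager-1")

# Simple one-to-one mapping, initialized with {} rather than dict().
tokenaddress_to_taskmanager = {}
tokenaddress_to_taskmanager[b"\x01" * 20] = "task-manager-1"
```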

Class attributes and functions:

All class members should be private by default, starting with a leading _. Whatever is part of the interface of the class should not have the leading underscore. The public interface is what we should be testing in our tests, and it should be the only way other parts of the code use the class.

Minimal Example:

class Diary:

    def __init__(self, entries: List[str]) -> None:
        self._entries = entries

    def entry(self, index: int) -> str:
        return self._entries[index]

(note the typing of __init__(...) -> None)

NewTypes and type comparisons

For often-used types it makes sense to define new types using the typing.NewType function. New type names should be capitalized.

Address = NewType('Address', bytes)

These type definitions cannot be used for runtime type comparisons. To make those possible, always define an associated alias, which must start with T_.

T_Address = bytes
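Both definitions together, with a runtime check using the alias (is_address is an illustrative helper, not part of Raiden):

```python
from typing import NewType

Address = NewType("Address", bytes)  # used in type annotations
T_Address = bytes                    # alias for runtime type comparisons


def is_address(value: object) -> bool:
    # NewType erases to its base type at runtime, so isinstance()
    # must use the T_ alias rather than Address itself.
    return isinstance(value, T_Address)
```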

typing.Optional convention

For typing.Optional we follow the convention that if the argument has a default value of None then we should omit the use of typing.Optional[].

Good Example:

def foo(a: int = None) -> ReturnType

Bad Example:

def foo(a: typing.Optional[int] = None)

imports

Classes must be imported into the global namespace, unless there are name collisions, in which case module imports can be used. Example:

from typing import Tuple

import a
from b import Normal


class Conflict:
    pass


def f() -> Tuple[a.Conflict, Normal]:
    return a.Conflict(), Normal()

Solidity

For Solidity we generally follow the style guide shown in the Solidity documentation, with a few notable exceptions:

Variable Names

All variable names should be in snake_case, just like in Python. Function names, on the other hand, should be mixedCase. MixedCase is essentially like CamelCase, but with the initial letter lowercase. This helps us easily determine which function calls are smart contract calls on the Python side.

function iDoSomething(uint awesome_argument) {
    doSomethingElse();
}

Documentation

Code should be documented. For docstrings the Google conventions are used.

Test suite

The test suite is divided into unit tests, integration tests and fuzz tests. In addition to that, there is the scenario runner tool for acceptance testing. The parts of the test suite differ in scope:

Fuzz tests

The fuzz tests have the smallest scope, they only test the core state machine. They are a randomized, model-based test of the state machine using hypothesis.stateful.

Every hypothesis rule defined for the fuzz test (i.e. every method decorated with @rule) performs exactly one state change (i.e. one call to the state machine's transition function, raiden.node.state_transition).

Unit tests

Many unit tests also test the proper processing of one or several state changes, but they may also test just details of the state machine or other parts of the code such as the API. The central RaidenService class and the interaction with the transport layer and the smart contracts are not in the scope of the unit tests.

Integration tests

The scope of the integration tests is the whole client. All integration tests require instantiating one or more RaidenService instances and providing the transport layer (e.g. by firing up a local Matrix server), which takes considerable time.

As a general rule, a test should only be made an integration test if the tested actions touch the transport layer. An exception to this are the tests related to the smart contracts/smart contract proxies found in raiden.tests.integration.contracts.

Test utilities

The utilities in raiden.tests.utils are used by all parts of the test suite.

In order to simplify the creation of objects for tests (some objects require more than ten initialization parameters), there is the extensive raiden.tests.utils.factories module. For each complicated object type there is a properties class which can be passed to a universal create function, which will fill in appropriate defaults for each unspecified parameter. See the documentation of raiden.tests.utils.factories.Properties for further details.

The raiden.tests.utils.detect_failure module contains two decorators for integration tests that run RaidenService instances. The use of one of the decorators is required on each such test and pytest will enforce it. @raise_on_failure will make sure every crash of a RaidenService leads to instant failure of the test, and @expect_failure indicates that we actually expect the crash of a RaidenService instance within the test and want to ignore it.

Sometimes a test using @raise_on_failure needs to restart nodes or start new ones during the test. Simply calling Raiden.start to achieve this will make @raise_on_failure lose track of the app. Therefore, such tests should always use the restart_node fixture and call restart_node(app) instead.

Workflow

When developing a feature or a bug fix, you should always start by writing a test for it, or by modifying existing tests to test your feature. Once you see that test fail, implement the feature and confirm that all your new tests pass.

Your addition to the test suite should call into the innermost level possible to test your feature/bugfix. In particular, integration tests should be avoided in favor of unit tests whenever possible.

Afterwards, open a Pull Request from your fork or feature branch against develop. You will be given feedback by the core developers of Raiden, and you should try to incorporate it into your branch. Once you do so and all tests pass, your feature/fix will be merged.

Updating documentation/spec

If a PR you are working on is updating something in the REST API, then make sure to also update the respective documentation. The documentation update can also be in a different PR but should be taken care of and linked to the initial PR.

If a PR you are working on is making a protocol change then make sure to also update the specification. The spec update can also be introduced in a different PR but should be taken care of and linked to the initial PR.

Contributing to other people's PRs

If you are a core developer of Raiden with write privileges to the repository, you can add commits to, or rebase onto develop, any Pull Request opened by other people.

Let us take this PR as an example. The contributor has everything ready and all is looking good, apart from a minor glitch. You can wait until they fix it themselves, but you can also help by contributing to their branch:

git remote add hackaugusto git@github.com:hackaugusto/raiden.git
git fetch hackaugusto
git checkout travis_build

Right now you are working on the contributor's Pull Request. Make sure to coordinate to avoid conflicts, and always warn people before working on their branch. Once you are done:

git commit -m 'Add my contribution

The PR was missing something. I added it.'
git push hackaugusto travis_build

Congratulations, you have added to someone else's PR!

Integrating Pull Requests

When integrating a successful Pull Request into the codebase, we can either "Rebase and Merge" or "Create a Merge commit". Unfortunately, GitHub's default is "Create a Merge commit". This is not our preferred option, because this way we can't be sure that the result of the merge will have all tests passing, since other patches may have been merged since the PR was opened. But there are many PRs for which we know there will be no conflicts and for which enforcing a rebase would only waste time. As such, we use both options at our own discretion. The general guidelines are:

  • If patches merged to develop since the PR was opened may change the behaviour of the PR, use Rebase and Merge.
  • If the patches merged to develop since the PR was opened only touch documentation, infrastructure or completely unrelated parts of the code, you can freely use Create a Merge Commit and save the time of rebasing.

Profiling

The code can be profiled by creating a flamegraph of the code paths, showing how much time is spent in each part of the code. For that you will need the flamegraph package.

Profiling a normal run

A normal run of raiden can be profiled by providing the --flamegraph /path/to/dir option. For example:

python -m raiden --eth-rpc-endpoint http://parity.goerli.ethnodes.brainbot.com:8545 --accept-disclaimer --network-id goerli --keystore-path ~/test_keystore --flamegraph ./TEMP

The /path/to/dir needs to be a directory where the flamegraph stack data will be saved.

Subsequently you can render the data to an .svg by running flamegraph.pl /path/to/dir/filename_stack.data > flamegraph.svg

Profiling a test run

A test run can be profiled by passing the extra argument --profiler=flamegraph-trace to pytest. For example:

pytest --profiler=flamegraph-trace -xs raiden/tests/integration/api/test_restapi.py::test_api_payments

This will generate stack data under /tmp/datetime_stack.data. Just as before, you can render it into a flamegraph with flamegraph.pl /tmp/datetime_stack.data.

Analyzing Profiling data

You can open the resulting .svg file via any viewer capable of reading svg and analyze how much time is spent in each part of the code by studying the flamegraph.